PRODUCTION SCHEDULING ALGORITHMS FOR SEMICONDUCTOR
TEST OPERATIONS
By
REHA UZSOY
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA
1990
As you set out for Ithaka
hope your road is a long one,
full of adventure, full of discovery.
Laistrygonians, Cyclops,
angry Poseidon — don't be afraid of them:
you'll never find things like that on your way
as long as you keep your thoughts raised high,
as long as a rare excitement
stirs your spirit and your body.
Laistrygonians, Cyclops,
wild Poseidon — you won't encounter them
unless you bring them along inside your soul,
unless your soul sets them up in front of you.
Keep Ithaka always in your mind.
Arriving there is what you're destined for.
But don't hurry the journey at all.
Better if it lasts for years,
so you're old by the time you reach the island,
wealthy with all you've gained on the way,
not expecting Ithaka to make you rich.
Ithaka gave you the marvelous journey.
Without her you wouldn't have set out.
She has nothing left to give you now.
And if you find her poor, Ithaka won't have fooled you.
Wise as you will have become, so full of experience,
you'll have understood by then what these Ithakas mean.
C.P. Cavafy
ACKNOWLEDGEMENTS
I would like to extend my sincere appreciation to Dr.
Louis A. Martin-Vega, chairman, and Dr. Chung-Yee Lee,
co-chairman of my supervisory committee, for their guidance
and assistance, without which this work could not have been
completed. Their excellent teamwork, their sound advice
on matters academic and otherwise, and their willingness to
sit down and reason with a stubborn Turco-Scot have set me an
excellent example to follow throughout my career.
Thanks are also due to Dr. D.J. Elzinga, Dr. Sencer
Yeralan and Dr. Selcuk Erenguc for serving on my supervisory
committee and providing me with valuable assistance and
feedback as the work progressed. I should also like to thank
Dr. Elzinga for helping me start my teaching career. Special
thanks are due to Dr. B.D. Sivazlian for serving on my
committee for a time and for his support and encouragement
throughout.
I would also like to acknowledge the support of Harris
Semiconductor, which made it possible for me to work in a
real-world environment that motivated the research in this
dissertation. I would especially like to thank Mr. T. Haycock,
Mr. P. Leonard, Mr. J. Rice and Mr. J. Hinchman for their
assistance and cooperation over the last two years. It has
been a pleasure to work with them.
Without the support of my family and friends this work
would never have been completed. To my parents, Nancy and
Safak Uzsoy, goes my appreciation for their unfailing
confidence, support, the excellent opportunities they have
given me and the excellent example they have set me. Special
thanks are due to my roommates over the last four years, who
have lived a good deal of the Ph.D. experience with me: David
and Farah Ramcharan, Irfan Ovacik and Haldun Aytug. Among my
friends, Elias Stassinos, Serpil Unver, Clara Azcunes and
Roberto Cavalieros deserve special mention. Finally, to Gerry
Chestnut goes my heartfelt thanks for her support and
confidence over the last difficult months, and for showing me
how much growing up I still have left to do.
TABLE OF CONTENTS

ACKNOWLEDGEMENTS

ABSTRACT

CHAPTERS

I INTRODUCTION
    Objectives of Dissertation
    Outline of Remaining Sections

II PHYSICAL SITUATION
    The Semiconductor Manufacturing Process
    The Semiconductor Testing Process
    Management Objectives in Semiconductor Testing

III LITERATURE REVIEW
    Introduction
    Scheduling Theory
        Job Shop Scheduling
            Branch and Bound Algorithms
                Improvement-based branch and bound algorithms
                Conflict-based branch and bound algorithms
            Flowshop Scheduling
            Heuristic Approaches
            Shifting Bottleneck Approach
            Summary
        Single and Parallel Machine Scheduling
            Single-Machine Scheduling
            Parallel Machine Scheduling
            Batch Processing Machines
    Research on Semiconductor Manufacturing
    Summary

IV MODELLING APPROACH
    Introduction
    Modelling of Job Shop
    Disjunctive Graph Representation
    Approximation Methodology
        Step 3: Determination of Critical Workcenter
        Step 4: Sequencing of the Critical Workcenter
        Step 5: Use of Disjunctive Graph to Capture Interactions
        Step 6: Resequencing in the Light of New Information
    Experimentation with Overall Methodology

V SINGLE MACHINE WORKCENTERS
    Introduction
    Description of a Single-Machine Workcenter
    Minimizing Maximum Lateness
        Algorithms for 1/prec,SDST/Lmax
            A branch and bound algorithm for 1/prec,SDST/Cmax
            Dynamic programming algorithms for 1/prec,SDST/Lmax
        Heuristic Procedures for 1/rj,prec,qj,SDST/Cmax
        A Neighborhood Search Algorithm
    Minimizing the Number of Tardy Lots
        A Heuristic Procedure for 1/prec,SDST/ΣUj
        Worst-Case Analysis for 1/SDST/ΣUj
        Dynamic Programming Procedures for 1/prec,SDST/ΣUj
    Summary

VI BATCH PROCESSING MACHINES
    Introduction
    Assumptions and Notation
    Minimizing Total Flowtime
    Minimizing Maximum Tardiness
    Minimizing Number of Tardy Jobs
    Parallel Batch Machines
    Summary

VII PROTOTYPE IMPLEMENTATION OF APPROXIMATION METHODOLOGY
    Introduction
    Implementation Environment
    Implementation of Approximation Methodology
    Computational Testing
    Experimental Results
    Summary and Conclusions

VIII SUMMARY AND FUTURE DIRECTIONS
    Summary of Accomplishments
    Single and Parallel Machines
    Batch Processing Machines
    Overall Approximation Scheme

REFERENCES

BIOGRAPHICAL SKETCH
Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
PRODUCTION SCHEDULING ALGORITHMS FOR SEMICONDUCTOR
TEST OPERATIONS
by
Reha Uzsoy
August 1990

Chairman: Dr. Louis A. Martin-Vega
Co-chairman: Dr. Chung-Yee Lee
Major Department: Industrial and Systems Engineering
We consider a class of job shop scheduling problems
motivated by semiconductor test operations but having broad
applicability in other industries. Since the problem is
NP-hard, we present an approximation methodology which proceeds
by dividing the job shop into a number of workcenters and
scheduling these sequentially. A disjunctive graph is used to
capture interactions between workcenters. The performance
measures to be minimized are maximum lateness and number of
tardy jobs.
The development of such an approximation methodology
requires efficient means of scheduling the individual
workcenters. In this dissertation we first consider
workcenters consisting of a single machine. The problems of
scheduling these machines are characterized by lateness-related
performance measures, sequence-dependent setup times
and precedence constraints, and are thus NP-hard. We provide
optimal implicit enumeration algorithms and heuristics with
tight error bounds for a number of these problems.
Another type of workcenter considered consists of batch
processing machines. A batch processing machine is one where
a number of jobs are processed simultaneously as a batch. We
present polynomialtime solution procedures for a number of
problems of scheduling workcenters consisting of single or
parallel identical batch processing machines.
Finally, we demonstrate how some of the algorithms
developed can be integrated into the overall approximation
methodology and discuss future research.
CHAPTER I
INTRODUCTION
The area of scheduling deals with the problem of
allocating a collection of scarce resources over time to
perform a number of tasks. While first conceived in a
production context, the areas of application of scheduling
theory have broadened over the years to include service
industries, computer system design, vehicle routing and many
others. Attempts to model and quantify the scheduling
process, starting around the turn of the century, have led
to the development of a broad body of knowledge in this
field.
The deterministic scheduling problem can be defined as
follows:
"Given a set of n 'jobs' (tasks, events, products),
that have to pass through m machines (processors) under
certain restrictive assumptions, determine the schedule that
optimizes some measure of performance."
The development of complexity theory over the last
fifteen years has provided profound insights into the nature
of scheduling problems. Due to the work of researchers like
Lageweg, Lawler, Lenstra and Rinnooy Kan [59,60], a complete
classification of deterministic scheduling problems is
available. This work has shown that while many scheduling
problems can be solved efficiently in polynomial time, there
are a great many others for which it is unlikely that such
good methods exist [42,71]. For problems in the latter class,
the researcher is forced to resort to heuristics for good
solutions, or implicit enumeration methods to obtain optimal
solutions. There have also, in recent years, been a number
of attempts to apply techniques from other engineering
fields such as artificial intelligence and control theory to
solve scheduling problems. Interesting reviews of some of
these research efforts may be found in Buxey [19] and
Rodammer and White [88].
While the benefits yielded by effective scheduling vary
depending on the area of application, it is clear from both
theory and practice that significant differences may result
from the use of different schedules. This is especially the
case in industries using extremely capitalintensive
technologies and operating in highly competitive markets.
High competition for capacity at key resources and the
importance of customer satisfaction render scheduling
decisions particularly critical in such enterprises. The
epitome of such industries today is the semiconductor
industry.
The miniaturization of electronic components by means
of Very Large Scale Integration (VLSI) technologies has been
one of the most significant technological developments of
the last fifty years. Steadily improving technologies and
decreasing prices have led to integrated circuits appearing
in all walks of life. The computer revolution of the past
two decades is a direct result of the ability to develop and
fabricate these components economically. Integrated circuits
can be found in almost every piece of military hardware in
use today, rendering this industry extremely important from
the point of view of national security. The development of
complex ComputerIntegrated Manufacturing (CIM) systems,
essential to the maintenance of a competitive edge in
today's volatile, global markets, is directly linked to the
availability of the integrated components of the
controllers, computers and communications equipment
necessary for their implementation. Integrated circuits are
also used in a wide range of industries, such as domestic
appliances, cars and avionics. Thus it is safe to state that
the importance of the semiconductor industry today is
comparable to, if not greater than, that of the steel
industry around the turn of the century.
Despite the widely recognized importance of this
industry, it is only in the last few years that the
operational aspects of semiconductor manufacturing companies
are being addressed and attempts being made to bring
industrial engineering and operations research techniques to
bear on problems in these areas. The majority of these
efforts to date, however, have focused on the extremely
capitalintensive and technologically complex wafer
fabrication process. The socalled 'back end' operations
where the chips are packaged, tested and shipped to the
customer, have remained relatively unexamined.
Objectives of Dissertation
The objective of the research described in this
dissertation is to develop and apply production scheduling
methodologies to certain job shop scheduling problems whose
structure is derived from industrial settings. The primary
motivation for the problems addressed in this dissertation is
found in testing operations within a semiconductor
manufacturing facility although the classification of the
problem is generic in nature. The purpose of these
operations is the testing of the finished product to ensure
that it meets the customer specifications. Since these
operations do not add any value to the product, improvements
in productivity resulting from more effective scheduling
will reduce overhead, helping thus to reduce costs.
An important consideration throughout this research
will be the relevance of the resulting algorithms in actual
realtime testing environments.
Outline of Remaining Sections
The purpose of Chapter II is to provide the motivation
for the following sections. A broad overview of the
semiconductor manufacturing process is given. Test
operations are placed in this perspective and described in
detail. Insights into the physical situation will enable us
to derive the structure of the scheduling problems addressed
in this research.
The purpose of Chapter III is to place the
research presented here in perspective with respect to the existing body
of knowledge in the areas of both scheduling theory and
semiconductor manufacturing. The first section reviews
relevant results from scheduling theory which form a basis
for this work. The second section reviews applications of
operations research techniques to problems in semiconductor
manufacturing. Finally, the contribution of this research to
the above areas is discussed in the light of these reviews.
Chapter IV describes the modelling of the test facility
as a job shop and the methodology with which the problem
will be approached. This methodology entails decomposing the
job shop into a number of workcenters and sequencing these
individually, while capturing their interactions using a
disjunctive graph representation of the entire facility.
Chapter V presents formulations and solution approaches
to the problems of sequencing workcenters consisting of a
single machine under different performance measures. Chapter
VI examines problems related to scheduling batch processing
machines. Chapter VII gives results and insights obtained
from preliminary computational experience with some of the
solution procedures developed in Chapter V. In Chapter VIII
we present a summary of the accomplishments of this research
and directions for future investigation.
CHAPTER II
PHYSICAL SITUATION
The Semiconductor Manufacturing Process
The process by which Very Large Scale Integrated (VLSI)
circuits are manufactured can be divided into four basic
steps: wafer fabrication, wafer probe, assembly or packaging
and final testing. While the research in this dissertation
is motivated by the final testing stage, we will give a
brief overview of the entire process to put the testing
operations in perspective and to provide the background
information for some of the literature reviewed in Chapter
III.
Wafer fabrication is the most technologically complex
and capital intensive of all four phases. It involves the
processing of wafers of silicon or gallium arsenide in order
to build up the layers and patterns of metal and wafer
material to produce the required circuitry. The number of
operations here can be well into the hundreds for a complex
component such as a microprocessor. While the specific
operations may vary widely depending on the product and the
technology in use, the processes in wafer fabrication can be
roughly grouped as follows [18]:
Cleaning
The object of this operation is the removal of
particulate matter before a layer of circuitry is produced.
Oxidation, deposition, metallization
In this stage a layer of material is grown or deposited
on the surface of the cleaned wafer. Extensive setup times
are involved, resulting in machines being dedicated to a
limited number of operations.
Lithography
This is the most complex operation, as well as the one
requiring the greatest precision. A photoresistant liquid is
deposited onto the wafer and the circuitry defined using
photography. The photoresist is first deposited and baked.
It is then exposed to ultraviolet light through a mask which
contains the pattern of the circuit. Finally the exposed
wafer is developed and baked.
Etching
In order to define the circuits, in this step the
exposed part of the material is etched away.
Ion Implantation
At this stage selected impurities are introduced in a
controlled fashion to change the electrical properties of
the exposed portion of the layer. Setups may range from
minutes to hours.
Photoresist Strip
The photoresist remaining on the wafer is removed by a
process similar to etching.
Inspection and Measurement
The layer is inspected and measured to identify defects
and guide future operations.
This sequence of operations is repeated for each layer
of circuitry on the wafer, in some cases up to 8 or 9 times.
A detailed description of the technologies used in VLSI
wafer fabrication can be found in specialized texts on this
subject [90].
In the next stage, wafer probe, the individual
circuits, of which there may be hundreds on one wafer, are
tested electrically by means of thin probes. Circuits that
fail to meet specifications are marked with an ink dot. The
wafers are then cut up into the individual circuits or
chips, known as dice, and the defective circuits discarded.
The assembly stage is where the dice are placed in
plastic or ceramic packages that protect them from the
environment. This entails the placing of the chip in an
appropriate package and the attachment of leads. There are
many different types of packages, such as plastic or ceramic
dual inline packages, leadless chip carriers, and pingrid
arrays. Since it is possible for a given circuit to be
packaged in many different ways, there is a great
proliferation of product types at this stage. Once the leads
have been attached, the package sealed and tested for leaks,
cracks and other defects, the product is sent to final test.
The Semiconductor Testing Process
The goal of the testing process is to ensure that
customers receive a defectfree product by using automated
testing equipment to interrogate each integrated circuit and
determine whether or not it is operating at the required
specifications. Product flows through the test area in lots.
Lots vary in size from several individual chips to several
thousand and are processed as a batch. Once a certain
operation has been started on a lot, every chip in the lot
must be processed. The actual sequence of operations a lot
will go through depends on the product and on customer
specification. While there is considerable variety in
process flows, a general idea of product flow can be formed
from Figure 2.1. Products are also classified by primary
product line, as digital, analog or data acquisition. The
test area is organized into cells based on this
classification.
The specific test system that a product can be tested
on depends on the type of product only. Thus, each product
can be tested only on a certain specific test system and
there is no flow of work between different testing
workcenters. Thus the sequence of one test workcenter will
affect the sequence of another only due to the interaction
at non-test operations, such as brand and burn-in.

[Figure 2.1: General flow of product through the semiconductor test area.]
The major operations taking place in the testing
process are the following:
Brand
This operation consists of the printing of the name of
the manufacturer and other information required by the
customer, such as serial number, on the product package.
Burn-in
In this operation the circuits are subjected to a
thermal stress of 125 degrees centigrade for a period of
time generally not less than 96 hours in order to
precipitate latent defects that would otherwise surface in
the operating environment. This is done by loading the
circuits onto boards. Each board can hold a certain number
of circuits, and each circuit requires a certain specific
board type. Once the circuits have been loaded onto the
boards, the boards are loaded into ovens. Each oven can
accommodate a limited number of boards, and certain types of
circuit may require a specific oven. It is possible to load
a number of different lots into the same oven. However, once
the burnin process has begun, it is undesirable to open the
oven to remove or insert lots. The reason for this is that
the temperature drop resulting from the opening of the door
biases the test, requiring extra burnin time for all
circuits in the oven at the time the drop occurred. Thus
once processing has begun, no lot in the oven, i.e., in the
batch being processed, can be removed until the entire
process is complete. Modelling of these systems as batch
processing machines will be described in Chapter VI.
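As a rough illustration of the batching constraint (and not one of the algorithms developed in Chapter VI), the sketch below greedily packs lots into successive oven loads. The lot names and board counts are made up, and each lot is assumed to need no more boards than one oven holds.

```python
def form_batches(lots, oven_capacity):
    """Greedily group (lot, boards_needed) pairs into oven loads,
    largest lots first, never exceeding the oven's board capacity.
    Once a batch starts processing it stays intact until burn-in
    completes, matching the no-interruption rule described above."""
    batches, current, used = [], [], 0
    for name, boards in sorted(lots, key=lambda lot: -lot[1]):
        if used + boards > oven_capacity:   # oven full: start a new load
            batches.append(current)
            current, used = [], 0
        current.append(name)
        used += boards
    if current:
        batches.append(current)
    return batches
```

With four hypothetical lots needing 4, 3, 2 and 1 boards and an oven holding 5, the rule produces three loads; the Chapter VI algorithms address the same structure with optimality guarantees.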
Quality Assurance
At this step the circuits are checked visually for
defects like bent leads or chipped packages, and the
paperwork associated with the testing is examined to ensure
that all customer specifications have been met.
Testing
This is the central operation of the testing process,
and consists of the subjection of the components to a series
of tests by computer-controlled testing equipment at various
temperatures. Since this operation provides the motivation
for the study of several of the scheduling problems examined
in this research, we will describe this in some detail.
In order for a testing operation to be able to take
place, the following conditions must be met:
1) The Tester, the computer-controlled device that does
the actual testing, must be available. A number of testers
have separate high and lowvoltage heads, which for all
practical purposes function as independent testers.
2) The Handler, a device that transfers the individual
chips from the load chutes to the single set of contacts
connected to the tester and then to the output bin according
to the result of the test, must be available. The handlers
also bring the chips to the required temperature, if high
temperature (125°C) or low temperature (-55°C) testing is
required in addition to roomtemperature testing. The
handler is restricted in the types of packages it can
handle, and in some cases by temperature capabilities.
3) The Load Boards and Contacts, the electrical devices
that form the interface between the tester and the handler,
must be available. These are also package, and sometimes
even product, specific.
4) The Test Software, to control the tester, must be
downloaded from a host computer to the tester and activated.
Thus, we see that the operation of setting up a tester
to test a certain type of product consists of
1) Obtaining the appropriate handler, load board,
contacts and bringing them to the tester or test head
concerned,
2) Connecting handler, contacts and load boards to the
tester,
3) Bringing the handler to the required temperature,
4) Downloading the required software.
The amount of time required for these operations may be
of the order of 45 minutes, which is significant compared to
the processing times of the individual chips. It is also
clear that the scheduling decisions can have a considerable
effect on the time spent in setup. By scheduling together
lots requiring the same temperature, for example, one can
reduce the time spent bringing the handler to the required
temperature. This nature of the setup operations results in
sequencedependent setup times. It is important to note,
however, that the number of distinct setup times is very
limited. The time required to change a handler, or to change
temperature from room to high temperature, for example, is
well known. Thus it is possible to characterize all possible
setup changes with less than 10 different times. This factor
will be exploited in the dynamic programming algorithms
developed in Chapter V.
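As an illustration only, this setup structure can be captured by summing a handful of known change times. All field names and the specific values below are hypothetical; the text states only that a full setup is of the order of 45 minutes and that fewer than ten distinct times suffice.

```python
# Hypothetical change times in minutes; in practice the distinct
# values are few in number and well known on the shop floor.
HANDLER_CHANGE = 30   # swap handler, load boards and contacts
TEMP_CHANGE = 10      # bring the handler to the new temperature
SOFTWARE_LOAD = 5     # download test software from the host

def setup_time(prev_lot, next_lot):
    """Sequence-dependent setup between consecutive lots, computed
    from which elements of the tester configuration must change."""
    t = 0
    if prev_lot["handler"] != next_lot["handler"]:
        t += HANDLER_CHANGE
    if prev_lot["temp"] != next_lot["temp"]:
        t += TEMP_CHANGE
    if prev_lot["program"] != next_lot["program"]:
        t += SOFTWARE_LOAD
    return t
```

Scheduling together lots that share a temperature then shows up directly as a smaller setup_time between them.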
Management Objectives in Semiconductor Testing
An example of the decision logic commonly used in
practice for scheduling test equipment is illustrated by
Figure 2.2. Lots that are late have priority over all
others. A lot is considered to be late if its due date has
passed without its being delivered to the customer. Once the
late lots have been determined, the major considerations, in
order of importance, are the handler and the test
temperature. Once a tester is in a certain setup
configuration, all lots requiring that configuration will be
processed before a change is made. If a change becomes
necessary, a change at the lowest possible level of the tree
is preferred. In the event of a number of different
requirements at a given level in the tree, the configuration
that will service the largest number of lots awaiting
processing is adopted.
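A minimal sketch of this hierarchy as a dispatch rule follows. The lot fields are hypothetical and the rule is a simplification of the decision logic just described.

```python
def next_configuration(waiting_lots, current):
    """Choose the configuration for the next run: late lots dominate,
    then keeping the current handler, then the current temperature;
    remaining ties go to the configuration serving the most lots."""
    def demand(lot):
        # number of waiting lots served by this lot's configuration
        return sum(1 for other in waiting_lots
                   if (other["handler"], other["temp"])
                   == (lot["handler"], lot["temp"]))

    best = max(waiting_lots,
               key=lambda lot: (lot["late"],
                                lot["handler"] == current["handler"],
                                lot["temp"] == current["temp"],
                                demand(lot)))
    return {"handler": best["handler"], "temp": best["temp"]}
```

The tuple comparison mirrors the levels of the tree: a late lot forces a setup change if necessary, while among on-time lots a change at the lowest possible level is preferred.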
[Figure 2.2: Decision logic for scheduling test equipment.]
The decision process described above provides the
motivation for examining the performance measures of maximum
lateness and number of tardy lots. These performance
measures reflect management concerns for better customer
service through ontime delivery. Explicit consideration of
the setup times in the scheduling models developed addresses
the concerns of shopfloor personnel for reducing time spent
in setup changes.
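In symbols, with completion time C_j and due date d_j for lot j, maximum lateness is max over j of (C_j - d_j), and lot j is tardy when C_j > d_j. A minimal sketch of the two measures, on illustrative data only:

```python
def max_lateness(completion_times, due_dates):
    """Lmax: the largest value of (completion time - due date)."""
    return max(c - d for c, d in zip(completion_times, due_dates))

def number_tardy(completion_times, due_dates):
    """Count of lots finishing strictly after their due date."""
    return sum(c > d for c, d in zip(completion_times, due_dates))
```

For example, completions (5, 9, 12) against due dates (6, 8, 10) give a maximum lateness of 2 with two tardy lots.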
CHAPTER III
LITERATURE REVIEW
Introduction
As indicated in Chapter I, the motivation for the
problems addressed in this dissertation stems from
particular job shop characteristics and parameters found in
semiconductor test operations. This has led to the
identification of a set of problems that are not only
meaningful and original from an application perspective but
also within the general context of the theory of job shop
scheduling. The first part of this review will focus on the
body of scheduling theory that is relevant to this research.
The second section will cover research that has been carried
out in industrial engineering and operations research
related to modelling and analyzing semiconductor
manufacturing operations in general.
Scheduling Theory
In this section we will present a review of the body of
scheduling theory that serves as a basis for this research.
We shall begin with the general job shop scheduling problem,
which is the central theme of this research. Approximate
methods of solving this problem which proceed by
decomposing the job shop into a number of workcenters and
scheduling these iteratively are examined. The subproblems
occurring in these methods lead to consideration of single
and parallel machine scheduling problems. Relevant
literature for these classes of problems is reviewed in the
following two sections.
Job Shop Scheduling
For the purposes of this research we can define the job
shop scheduling problem as follows. We are given a set of m
nonidentical workcenters, which may consist of a single
machine or parallel identical machines, and n jobs to be
processed. The sequence of workcenters which each job must
visit is known a priori. The problem is to schedule the jobs
on the workcenters so as to optimize some measure of
performance.
The classical job shop scheduling problem referred to
by that name in the literature is a special case of the
above, where each workcenter consists of a single machine
that is capable of processing only one job at a time. The
performance measure most commonly considered is the
makespan, or time elapsed from the start to the completion
of the last job. This problem is represented as J//Cmax in
the notation of Lageweg et al. [59,60], and has been shown to
be NP-hard in the strong sense [42]. Even among NP-hard
problems, it is one of the more difficult. While it is
possible to solve travelling salesman problems with several
hundred cities to optimality in a reasonable period of time,
a 10-job, 10-machine job shop scheduling problem posed by
Muth and Thompson defied solution for over 20 years before
finally being solved by Carlier and Pinson [21] in five hours
of computer time. Thus, the two main avenues of attack on
this problem have been implicit enumeration methods and
heuristics. Before discussing these approaches, however, let
us describe the representation of the J//Cmax problem as a
disjunctive graph. This representation provides useful
insights into the structure of the problem and has formed
the basis for some of the most successful solution
approaches.
Disjunctive graph representation
A disjunctive graph is a graph consisting of a set of
nodes N, a set of conjunctive arcs A and a disjunctive arc
set E. Two arcs are said to form a disjunctive pair if any
path through the graph can contain at most one of them. A
conjunctive arc is simply an arc that is not disjunctive. In
order to represent the job shop scheduling problem as a
disjunctive graph, let us introduce the concept of
operations. An operation consists of the processing of a
certain job at a certain machine. The problem of scheduling
the n jobs on the m machines can now be viewed as that of
scheduling of the operations associated with the jobs on the
machines. The sequence in which the jobs have to visit the
machines induces precedence constraints between operations
on the same job. Let N be the set of all operations, plus
two dummy operations representing a source (operation 0) and
a sink (operation *) respectively. Define a node i for every
operation i ∈ N. Add a conjunctive arc (i,j) if operation i
has to be performed before operation j. Disjunctive pairs of
arcs link operations that can be carried out at the same
machine. If we let N be the set of nodes, A the set of
conjunctive arcs and E the set of disjunctive arcs, we have
now obtained the disjunctive graph G = (N,A,E). Note that
the set of operations N and the set of disjunctive arcs E
decompose into subsets N_k and E_k, each associated with a
particular machine k. With each arc (i,j), associate a cost
c_ij which corresponds to the time it takes to complete
operation i. To illustrate this mode of representation,
consider the following example with five jobs and four
machines [1].
Job    Operation    Machine    Predecessor
 1         1           1           -
 1         2           4           1
 2         3           1           -
 2         4           2           3
 2         5           4           4
 3         6           1           -
 3         7           4           6
 3         8           3           7
 4         9           1           -
 4        10           3           9
 4        11           2          10
 5        12           3           -
 5        13           2          12
This can be represented as the disjunctive graph in
Figure 3.1.
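To make the construction concrete, the routing table above can be translated mechanically into the node set N, conjunctive arc set A and disjunctive pair set E. The following sketch is purely illustrative (it is not part of the dissertation); processing times are omitted because the table lists only routings.

```python
from collections import defaultdict
from itertools import combinations

# Routings of the five-job, four-machine example:
# each job is a list of (operation, machine) pairs in processing order.
jobs = {
    1: [(1, 1), (2, 4)],
    2: [(3, 1), (4, 2), (5, 4)],
    3: [(6, 1), (7, 4), (8, 3)],
    4: [(9, 1), (10, 3), (11, 2)],
    5: [(12, 3), (13, 2)],
}

SOURCE, SINK = 0, "*"          # the two dummy operations
N = {SOURCE, SINK}             # node set
A = set()                      # conjunctive arcs (i, j): i precedes j
ops_on_machine = defaultdict(list)

for route in jobs.values():
    prev = SOURCE
    for op, machine in route:
        N.add(op)
        ops_on_machine[machine].append(op)
        A.add((prev, op))      # precedence within the job
        prev = op
    A.add((prev, SINK))        # last operation of the job feeds the sink

# A disjunctive pair links every two operations sharing a machine.
E = {frozenset(pair)
     for ops in ops_on_machine.values()
     for pair in combinations(ops, 2)}
```

For this instance the construction yields 15 nodes (13 operations plus the two dummies), 18 conjunctive arcs and 15 disjunctive pairs.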
We denote by D = (N,A) the directed graph obtained by
deleting all disjunctive arcs from G. For each E_k, a
selection S_k contains exactly one member of each disjunctive
arc pair. A selection is acyclic if it contains no directed
cycles. Each selection S_k completely defines the precedence
relations of each operation to every other operation carried
out on machine k. Thus an acyclic selection S_k corresponds
to a unique feasible sequence of the operations to be
carried out on machine k. A complete selection S is the
union of all selections S_k over the individual machines k.
When we construct a complete selection, we are able to
replace the disjunctive graph G = (N,A,E) by the conjunctive
graph D_S = (N, A ∪ S). A complete selection is acyclic if D_S
is acyclic.

[Figure 3.1: Disjunctive graph representation of the five-job, four-machine example.]

Each acyclic complete selection S defines a
family of schedules, and every schedule belongs to exactly
one such family. The makespan of a schedule that is optimal
for S is equal to the length of a longest path in D s . Thus,
the scheduling problem becomes that of determining an
acyclic complete selection S that minimizes the length of a
longest path in the directed graph D s .
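To make this evaluation concrete, the longest-path computation over a conjunctive graph can be sketched as follows. This is our own illustrative sketch, not part of the cited work: the function and argument names are assumptions, the graph is taken to be acyclic, node 0 is the source, the last node is the sink, and each arc (i,j) carries the processing time of its tail node i as defined above.

```python
from collections import defaultdict, deque

def makespan(num_nodes, arcs, p):
    """Length of a longest path from source 0 to sink num_nodes-1 in
    the conjunctive graph D_S; arc (i, j) carries the processing
    time p[i] of its tail node, as in the disjunctive graph model."""
    succ = defaultdict(list)
    indeg = [0] * num_nodes
    for i, j in arcs:
        succ[i].append(j)
        indeg[j] += 1
    dist = [0] * num_nodes                 # dist[i] = L(0, i)
    ready = deque(v for v in range(num_nodes) if indeg[v] == 0)
    while ready:                           # topological sweep
        i = ready.popleft()
        for j in succ[i]:
            dist[j] = max(dist[j], dist[i] + p[i])
            indeg[j] -= 1
            if indeg[j] == 0:
                ready.append(j)
    return dist[num_nodes - 1]
```

An acyclic complete selection is evaluated by passing the arc set A ∪ S; for a simple chain 0 → 1 → 2 → 3 with node processing times 0, 3, 2, 0 the routine returns 5, the sum of the two operation times.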
Branch and Bound Algorithms
The disjunctive graph representation of the job shop
scheduling problem has formed the basis for a number of
branch and bound algorithms. These algorithms can be
classified into two broad groups. The algorithms in the
first class proceed by constructing an initial feasible
solution and then improving it by selectively reversing
disjunctive arcs. The second class of algorithms constructs
a schedule until a conflict of some kind, usually violation
of a capacity constraint at a machine, occurs. They then
branch on each of the possible outcomes of the conflict. A
similar classification of enumerative methods of solving the
job shop scheduling problem is given by Lageweg et al.[61].
Improvement-based branch and bound algorithms
One of the earliest algorithms in the first class was
developed by Balas[8]. Let S denote the set of all complete
selections and G_h the conjunctive graph associated with a
selection S_h. Let G' be the set of all G_h such that G_h is
circuit-free. We know from the discussion above that the
solution of the minimum makespan problem is equivalent to
that of finding an optimal selection and minimaximal path
in this disjunctive graph.
The algorithm generates a sequence of circuit-free
graphs G_h ∈ G' and solves a slightly modified critical path
problem for each G_h in the sequence. Each graph G_h is
obtained from a previous member of the sequence by reversing
the direction of one disjunctive arc. At each stage some
disjunctive arcs are fixed while some are free to be
reversed, but only the candidates for reversal that lie on
a critical path of the current G_h need to be considered.
This, however, is only true when the arc between two nodes
is the shortest path between the two nodes. At each stage
the shortest critical path found so far provides an upper
bound, while the critical path in the partial graph
containing only the fixed arcs yields a lower bound. In
another paper [10], Balas provides another approach to the
solution of this problem where he relates it to the concept
of degree-constrained subgraph. In [9] he extends the
disjunctive graph representation to handle parallel
machines. In [11] he characterizes the facial structure of
the polyhedra related to this problem.
Carlier and Pinson[21] present a branch and bound
algorithm that makes use of single-machine problems to
obtain bounds and various propositions which enable the size
of the search tree to be limited. These authors again branch
by selecting a disjunctive arc and examining each of its two
possible orientations. This algorithm has the distinction of
having been the first to optimally solve the notorious
10-job, 10-machine job shop problem posed by Muth and
Thompson[21].
Conflict-based branch and bound algorithms
Charlton and Death[22,23] propose an algorithm that
uses the second approach. These authors start by considering
only the conjunctive arcs and determining the start times
for jobs on machines based on this information. They then
select a machine k on which two operations i and j are
processed simultaneously and branch by considering fixing
the disjunctive arc (i,j) in each of its possible two
directions. The lower bound at a node of the search tree is
given by the critical path in the graph containing only the
fixed arcs. The authors claim computational performance
superior to that of Balas' approach[8].
Barker and McMahon[13] also propose a method that is
based on branching using conflict resolution. In this
approach the conflict resolution on which the branching
takes place is based not on the conflict between two
operations but on the conflict between an operation and
several others that appear in a critical block in a
tentative schedule. The method generates a tree each node of
which is associated with a complete schedule and a critical
block of operations. At each node, a critical operation j is
determined. The critical block consists of a continuous
sequence of operations ending in the critical operation j.
The subproblems at each of the successor nodes are obtained
by fixing part of the schedule in the critical block. Lower
bounds are obtained by solving single-machine subproblems
using the algorithm of McMahon and Florian[76] , which is
itself a branch and bound method.
Florian et al.[40] also propose a branch and bound
algorithm for job shop scheduling based on the disjunctive
graph representation. This approach proceeds by determining
sets called cuts consisting of the first unscheduled
operation of all jobs. An operation is scheduled once all
disjunctive arcs incident into the corresponding node have
been fixed. The branching mechanism of the algorithm
proceeds by selecting one operation from the cut to be
scheduled next and fixing the disjunctive arcs accordingly.
The authors prove that a graph constructed by fixing
disjunctive arcs in this manner will never contain cycles
and that the set of schedules enumerated in this way
contains the optimal solution. The lower bound is based on
the fact that each machine must perform at least one
terminal operation. Hence, a lower bound for the job shop
problem is obtained by sequencing the remaining jobs on each
machine in order of increasing earliest start time.
Flowshop Scheduling
The flowshop scheduling problem (F//Cmax) is a special
case of the job shop problem where work flow is
unidirectional. Since it has also been shown to be
NP-hard[59,60], it has also been approached using branch and
bound algorithms. A number of algorithms for minimizing
makespan have been developed [5,6,53,54,55,62,91].
Gupta[49], Corwin and Esogbue[29] and Uskup and Smith[92]
have examined the problem of minimizing makespan in the
presence of sequence-dependent setup times. However,
performance measures other than makespan have not received
so much attention. In [54] and [55] Heck and Roberts develop
algorithms along the lines of that of Balas[8] for the
measures of performance of maximum tardiness, average flow
time and average tardiness. In order to minimize maximum
tardiness, they introduce the concept of a critical path for
maximum tardiness. This concept is then used in a manner
analogous to that of Balas[8] to decide which disjunctive
arcs are to be reversed. The average performance measures
are handled by the same type of enumeration and branching
mechanism. The difference in this case is that a sink node
is associated with each job to enable the performance
measures to be calculated easily. Hariri and Potts [52] have
developed a branch and bound algorithm to minimize number of
tardy jobs in a flowshop. However, this algorithm becomes
computationally very demanding as problem size increases.
Heuristic Approaches
The branch and bound methods described above all suffer
from the common fault of implicit enumeration approaches —
the exponential growth in computational effort as problem
size increases. Hence a good deal of research has been
devoted to developing heuristic procedures that obtain good
solutions with less computational burden. We shall
distinguish between two classes of heuristics: dispatching
rules, that take into account only local information at the
individual machines, and approximation methods, which take
into account the entire job shop.
There are a great many dispatching rules that have been
examined in the literature. Surveys of such rules for job
shop scheduling can be found in Baker[3], Conway et al.[28]
and Panwalkar and Iskander [81] . Dispatching rules have the
advantages of being easy to implement and explain, and will
often give good results. While this approach may be
sufficient for some cases, in job shops where there is high
competition for capacity at key resources the extra
computational effort involved in obtaining better schedules
would appear to be justified. This is the motivation for the
development of approximation methods, like the Shifting
Bottleneck Method developed by Adams et al.[1] described in
the next section.
An interesting application of new ideas to this problem
can be found in van Laarhoven et al.[93], who apply the
technique of simulated annealing to the job shop problem.
These authors prove that the algorithm they present
asymptotically converges to a global minimum, and finds
better schedules than heuristics at the expense of higher
computation times. Comparing their method with the Shifting
Bottleneck procedure[1], these authors found that overall
the shifting bottleneck outperforms simulated annealing on
the basis of a time versus quality of solution tradeoff.
Matsuo et al.[75] also provide a simulated annealing
procedure for the J//Cmax problem. The neighborhood
structure they employ is rather more sophisticated than that
of van Laarhoven et al.[93]. Their algorithm obtains
solutions as good as those obtained by the partial
enumeration version of the Shifting Bottleneck Procedure in
comparable computation times.
Another interesting class of heuristics for job shop
scheduling has been developed recently based on the concept
of resource pricing. An example of such an approach is given
by Morton et al.[79] in the SCHEDSTAR system. This system
assigns a price to each machine based on the jobs waiting
for it, their tardiness and inventory holding costs and the
material and labor costs involved. Based on these costs a
rate of return is calculated for each job and the job with
the highest rate of return is scheduled next. The authors
report that this procedure performs better than dispatching
rules over a wide range of problem instances.
A number of heuristics based on approaches like
neighborhood search and repeated application of Johnson's
Algorithm for the two-machine case[3] have been developed
for the flowshop problem. Surveys and evaluations can be
found in Dannenbring[31] and Park et al.[82].
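Johnson's Algorithm itself admits a statement in a few lines. The sketch below is our own illustration, not any of the cited implementations; the instance format is an assumption, with each job given as a pair of processing times on machines 1 and 2.

```python
def johnson(jobs):
    """Johnson's rule for F2//Cmax: jobs is a list of (p1, p2) pairs.
    Jobs with p1 <= p2 come first in nondecreasing p1; the remainder
    come last in nonincreasing p2. Returns a sequence of job indices."""
    first = sorted((i for i, (a, b) in enumerate(jobs) if a <= b),
                   key=lambda i: jobs[i][0])
    last = sorted((i for i, (a, b) in enumerate(jobs) if a > b),
                  key=lambda i: -jobs[i][1])
    return first + last

def f2_makespan(jobs, seq):
    """Evaluate a permutation schedule on the two machines."""
    t1 = t2 = 0
    for i in seq:
        t1 += jobs[i][0]              # machine 1 runs without idling
        t2 = max(t2, t1) + jobs[i][1] # machine 2 waits for machine 1
    return t2
```

For the five-job instance (3,2), (5,1), (1,4), (6,5), (2,3) the rule produces the sequence of job indices [2, 4, 3, 0, 1] with makespan 18.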
Shifting Bottleneck Approach
The basic idea of the Shifting Bottleneck (SB)
approach[1] is to give priority to the most critical
machine. At each machine a single-machine scheduling problem
is solved to optimality. The results are then used to rank
the machines in order of criticality, the one with the worst
objective function value being the most critical. The
solution associated with the most critical machine is fixed
as the sequence for that machine, and the procedure is
repeated in the light of this information for the machines
not yet scheduled. Each time a new machine is sequenced,
previously scheduled machines that are amenable to
improvement are reoptimized. The procedure continues in this
fashion until no further improvements are possible.
Having presented the broad framework, let us now
present the methodology in a more formal manner. There are
two important points to consider:
- the nature of, and solution procedure for, the
single-machine problems that are used to determine the degree
of criticality of a machine as well as the sequence of the
most critical machine;
- how the interactions between the individual machines
are captured using the disjunctive graph representation.
There are a number of different ways that a machine can
be classified as critical for the makespan problem. Let M_0
be the set of machines sequenced to date, and M the entire
set of machines. Denoting a selection associated with
machine k as S_k, we obtain a partial selection S = ∪_{k∈M_0} S_k.
Let D_S be the directed graph obtained by deleting all
disjunctive arcs associated with machines j, j ∈ M \ M_0, and
fixing the arcs in the selection S. A longest path in D_S
will correspond to a lower bound on the makespan. Thus, it
would be mathematically justifiable to define a machine k as
critical with respect to a partial selection S if S_k has an
arc on a longest path in D_S. This, however, does not allow
us to rank the machines in order of criticality, but merely
partitions the set of machines into two subsets, critical
and noncritical.
Instead, the solution to a single-machine sequencing
problem is used to determine the criticality of a machine.
Let us assume that we are interested in determining the
degree of criticality of machine k, given that the machines
j ∈ M_0 have already been sequenced. Create the problem
P(k,M_0) as follows:
- Replace each disjunctive arc set E_p, p ∈ M_0, by the
corresponding selection S_p.
- Delete all disjunctive arcs in E_j, j ∈ M \ M_0.
The release times and due dates for operations on
machine k are then determined using the disjunctive graph as
of sequencing a single machine to minimize maximum lateness
with due dates and release times. This problem in turn is
equivalent to that of sequencing a single machine so as to
minimize makespan, when each job has a release time and a
certain "tail" that represents the time it must spend in the
system after processing. These subproblems are solved using
the algorithm of Carlier[20], which is a branch and bound
procedure with good computational performance.
The release times r_i and due dates d_i associated with
operation i on machine k are determined from the disjunctive
graph obtained from P(k,M_0) above by solving a number of
longest path problems. Let L(i,j) denote the longest path
from node i to node j in D_T, T = ∪_{k∈M_0} S_k. Then

r_i = L(0,i)

where node 0 is the source node, and

d_i = L(0,n) - L(i,n) + p_i

where node n is the sink node and p_i is the processing time
for operation i. The "tail" associated with operation i can
then be calculated to be

q_i = L(0,n) - d_i
The authors exploit the structure of the disjunctive graph
to develop a longest path algorithm with a computational
effort of O(n), as opposed to the conventional algorithms
that require O(n^2) effort, where n denotes the number of
nodes.
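These quantities can be obtained in one forward and one backward pass over a topological order of D_T. The sketch below is our own illustration of that computation, not the authors' specialized O(n) implementation; it assumes node 0 is the source, the last node is the sink, and arc (i,j) costs p_i.

```python
from collections import defaultdict, deque

def release_due_tail(num_nodes, arcs, p):
    """For the conjunctive graph D_T (source 0, sink num_nodes-1,
    arc (i,j) costing p[i]) return, for every operation i,
    r_i = L(0,i), d_i = L(0,n) - L(i,n) + p_i and q_i = L(0,n) - d_i."""
    succ = defaultdict(list)
    indeg = [0] * num_nodes
    for i, j in arcs:
        succ[i].append(j)
        indeg[j] += 1
    order, ready = [], deque(v for v in range(num_nodes) if indeg[v] == 0)
    while ready:                       # build a topological order
        i = ready.popleft()
        order.append(i)
        for j in succ[i]:
            indeg[j] -= 1
            if indeg[j] == 0:
                ready.append(j)
    head = [0] * num_nodes             # head[i] = L(0, i)
    for i in order:
        for j in succ[i]:
            head[j] = max(head[j], head[i] + p[i])
    tail_len = [0] * num_nodes         # tail_len[i] = L(i, n)
    for i in reversed(order):
        for j in succ[i]:
            tail_len[i] = max(tail_len[i], p[i] + tail_len[j])
    n = num_nodes - 1
    r = head
    d = [head[n] - tail_len[i] + p[i] for i in range(num_nodes)]
    q = [tail_len[i] - p[i] for i in range(num_nodes)]
    return r, d, q
```

Note that q_i = L(i,n) - p_i equals L(0,n) - d_i term by term, so the tails can be read off either way.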
The Shifting Bottleneck methodology for minimizing
makespan can now be summarized as follows:
1) Identify a bottleneck machine k among the as yet
unscheduled machines and sequence it optimally.
2) Reoptimize each machine h whose selection S_h has an
arc on a longest path in D_T, keeping the other sequences
fixed. If all machines are sequenced, stop. Else, go to Step
1.
Computational experience with this methodology has been
extremely encouraging. The authors carried out experiments
on problems ranging from small ones whose optimal solution
was known to larger problems with 500 operations. The
approach took on the order of one or two minutes to solve
the larger problems, even though a great many single-machine
problems had to be solved. The authors observe that the
difficulty of solving a problem increases sharply with the
number of machines. However, increasing the number of
operations does not seem to affect the computational effort
significantly and seems to improve the quality of the
solutions. A significant fact is that the classic 10-job/10-
machine problem that resisted solution for more than 20
years was solved optimally by the Shifting Bottleneck
procedure in
just over 5 minutes. When compared with priority dispatching
rules, the Shifting Bottleneck Procedure outperformed them
38 out of 40 times.
Adams et al.[1] have also applied the Shifting
Bottleneck methodology to the nodes of a partial enumeration
tree. This method usually yields better solutions than
those obtained by applying the basic approach.
Summary
In the light of this review, we feel that we are able
to state the following conclusions:
 Approximation methods like the Shifting Bottleneck
approach are the most effective solution techniques
available at present for job shop scheduling problems.
 Performance measures other than makespan have not
been extensively examined to date.
Scheduling job shops in the presence of
sequence-dependent setup times and workcenters with parallel
identical machines and batch machines has not been examined
extensively.
Single and Parallel Machine Scheduling
The effectiveness of approximation methods like the SB
approach described in the previous subsection hinges on the
ability to efficiently schedule the individual workcenters.
These workcenters may consist of a single machine, or a
number of parallel identical machines. In this subsection we
will review results on single and parallel machine
scheduling that will assist us in developing solution
procedures for the workcenter subproblems.
Single-Machine Scheduling
Research on the sequencing of a number of jobs through
a single processor dates back to the 1950s. In this section
we shall only review results relevant to this research.
Reviews of the basic results in this area can be found in
Baker [3], Conway et al.[28] and French [41]. Detailed
complexity classifications of these problems are given by
Lageweg et al. [59,60].
The nature of the facility motivating this study and
the management objectives involved lead us to examine
single-machine scheduling problems with performance measures
of maximum lateness and number of tardy jobs. The problems
are characterized by the presence of release times, due
dates, precedence constraints and sequence-dependent setup
times. In order to represent these problems in a concise
manner we shall extend the notation of Lageweg et al.[59,60]
to include sequence-dependent setup times (SDST). Thus, for
example, the problem of minimizing Lmax on a single machine
with precedence constraints and sequence-dependent setup
times will be represented as 1/prec,SDST/Lmax.
The problem of minimizing Lmax on a single processor
without setup times has been extensively examined. The cases
with simultaneous release times (1//Lmax and 1/prec/Lmax)
are easy to solve using the Earliest Due Date rule and
Lawler's Algorithm respectively[3,64]. However, the presence
of nonsimultaneous release times renders the problem
1/r_j/Lmax NP-hard in the strong sense[60]. Thus we see that
our problem, 1/r_j,prec,SDST/Lmax, is NP-hard in the strong
sense even without the sequence-dependent setup times.
Furthermore, we note that the special case of 1/SDST/Lmax
with common due dates is equivalent to 1/SDST/Cmax, which is
well known to be equivalent in turn to the Travelling
Salesman Problem (TSP)[3].
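For reference, Lawler's Algorithm for 1/prec/Lmax admits a compact statement: repeatedly place last, among the jobs all of whose successors are already placed, the job with the latest due date. The sketch below is our illustration; the function name and the instance format are assumptions.

```python
def lawler_lmax(p, d, prec):
    """Lawler's rule for 1/prec/Lmax. prec is a list of pairs (i, j)
    meaning job i must precede job j. Returns (sequence, Lmax)."""
    n = len(p)
    nsucc = [0] * n
    pred = [[] for _ in range(n)]
    for i, j in prec:
        nsucc[i] += 1
        pred[j].append(i)
    remaining = set(range(n))
    seq = []
    while remaining:
        cand = [j for j in remaining if nsucc[j] == 0]
        j = max(cand, key=lambda k: d[k])   # latest due date goes last
        seq.append(j)
        remaining.remove(j)
        for i in pred[j]:
            nsucc[i] -= 1
    seq.reverse()
    t, lmax = 0, float("-inf")              # evaluate the sequence
    for j in seq:
        t += p[j]
        lmax = max(lmax, t - d[j])
    return seq, lmax
```

Without precedence constraints the rule reduces to the Earliest Due Date sequence mentioned above.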
The 1/r_j/Lmax problem has been examined by a number
of researchers. It has been shown that this problem is
equivalent to the problem of minimizing makespan (Cmax) on a
single machine in the presence of delivery times q_j = K -
d_j, where K > max_j{d_j} [71]. The optimal sequences for these
two problems are identical, and their optimal values differ
by the constant K. We shall denote this problem by
1/r_j,q_j/Cmax. Branch and bound algorithms for this problem
have been developed by Baker and Su[7], McMahon and
Florian[76] and Carlier[20]. The latter two approaches are
closely related and both have been integrated into larger
branch and bound schemes for solving the general job shop
problem[13,21].
Baker and Su[7] develop an enumeration scheme that
enumerates all active schedules. Active schedules are those
schedules in which no job can be started earlier without
delaying the start of another. Let S be the set of all jobs.
Then at time t the set Q of jobs eligible for scheduling
next is

Q = {j ∈ S : r_j < min_{k∈S} {max{t,r_k} + p_k}}

This ensures that only active schedules are generated, since
if r_j > max{t,r_k} + p_k for some job k, then k can precede j
employed is based on the fact that the value of an optimal
schedule will not increase if job splitting is allowed. A
lower bound for all completions of a partial schedule is
obtained by sequencing the remaining jobs in EDD order
allowing job splitting.
This algorithm can easily be extended to the problem
with precedence constraints by defining the set Q to be the
set of jobs all of whose predecessors have been scheduled
and that satisfy the condition specified above.
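The eligibility test and the job-splitting bound described above can be sketched as follows. This is our own illustration under the stated definitions; preemptive EDD solves 1/r_j/Lmax optimally when preemption is allowed, and therefore bounds the nonpreemptive problem from below.

```python
import heapq

def eligible(t, jobs, r, p):
    """Baker-Su candidate set Q at time t: jobs that can start before
    the earliest possible completion of any available job."""
    cutoff = min(max(t, r[k]) + p[k] for k in jobs)
    return {j for j in jobs if r[j] < cutoff}

def preemptive_edd_lmax(r, p, d):
    """Lower bound for 1/r_j/Lmax: optimal Lmax with job splitting,
    always running the released job with the earliest due date."""
    n = len(p)
    order = sorted(range(n), key=lambda j: r[j])
    avail, rem = [], list(p)               # heap of (d_j, j); work left
    t, k, lmax = 0, 0, float("-inf")
    while k < n or avail:
        if not avail:                      # idle until the next release
            t = max(t, r[order[k]])
        while k < n and r[order[k]] <= t:  # release newly arrived jobs
            heapq.heappush(avail, (d[order[k]], order[k]))
            k += 1
        dj, j = heapq.heappop(avail)       # earliest-due-date job
        run = rem[j] if k == n else min(rem[j], r[order[k]] - t)
        t += run
        rem[j] -= run
        if rem[j] == 0:
            lmax = max(lmax, t - d[j])
        else:                              # preempted by a new release
            heapq.heappush(avail, (dj, j))
    return lmax
```

The eligibility test generates only active schedules; the bound is evaluated once per node of the enumeration tree.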
A more sophisticated algorithm is given by McMahon and
Florian[76]. This approach uses a heuristic to construct a
good initial solution and then generates an enumeration tree
of improved solutions. The heuristic selects the job
available at time t that has the earliest due date, breaking
ties by choosing the job with longest processing time. The
resulting schedule consists of a number of blocks, which are
periods of continuous utilization of the machine. The
authors define the critical job to be the job that realizes
the value of Lmax in a given schedule. Branching rules and
lower bounds are obtained by scheduling other jobs last in
the block instead of the critical job.
The algorithm of Carlier[20] is closely related to that
of McMahon and Florian[76] and also makes use of the same
heuristic. This author proves that if L is the makespan of
the schedule obtained using this heuristic, then there
exists a critical job c and a critical set J such that

min_{i∈J} {r_i} + Σ_{i∈J} p_i + min_{i∈J} {q_i} > L - p_c
and that in an optimal schedule job c will be processed
either before or after all the jobs in J. This latter
observation forms the basis of the branching rule employed.
Lower bounds are obtained by applying the heuristic but also
allowing preemption. This algorithm has excellent
computational performance, and has been integrated into
algorithms for the job shop problem developed by Carlier and
Pinson[21] and Adams et al.[1].
Potts[85], Carlier[20] and Hall and Shmoys[51] present
heuristics for the 1/r_j,q_j/Cmax problem and analyze their
worst-case behavior. Most of these heuristics are based on
the Extended Jackson's Rule, which can be stated as follows:
Whenever the machine is free and there are one or more
available operations, sequence next the operation with the
largest value of q_i. The best heuristic developed so far
appears to be that of Potts quoted by Hall and Shmoys[51],
which has a worst-case error of one-third.
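The Extended Jackson's Rule is easily implemented with a priority queue on the tails q_j. The following is our own sketch, not any of the cited implementations; the instance format is an assumption.

```python
import heapq

def schrage(r, p, q):
    """Extended Jackson's Rule for 1/r_j,q_j/Cmax: whenever the machine
    is free, start the available job with the largest tail q_j.
    Returns (sequence, makespan), where makespan = max_j (C_j + q_j)."""
    n = len(p)
    order = sorted(range(n), key=lambda j: r[j])
    avail = []                        # max-heap on q via negated keys
    t, k, cmax = 0, 0, 0
    seq = []
    while k < n or avail:
        if not avail:                 # idle until the next release
            t = max(t, r[order[k]])
        while k < n and r[order[k]] <= t:
            heapq.heappush(avail, (-q[order[k]], order[k]))
            k += 1
        qj, j = heapq.heappop(avail)  # largest tail among available jobs
        seq.append(j)
        t += p[j]
        cmax = max(cmax, t + q[j])
    return seq, cmax
```

The rule is not optimal; on the instance r = (0,1), p = (3,2), q = (1,4) it returns makespan 9, whereas deliberately idling until time 1 and running the second job first yields the optimum of 7. Closing such gaps is precisely the work left to the branch and bound schemes discussed above.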
The performance measure of number of tardy jobs (ΣU_j)
is considerably more difficult to optimize than Lmax. The
problem 1//ΣU_j can be solved in polynomial time using
Moore's Algorithm[3]. Lawler[65] extends this approach to
the 1//Σw_jU_j problem where w_i < w_j implies p_i ≥ p_j, where w_j
is a nonnegative penalty for the job j being tardy. However,
the general 1//Σw_jU_j problem and the 1/prec/ΣU_j problem are
both NP-hard[59,60,72]. Lawler and Moore[66] give a
pseudopolynomial dynamic programming algorithm for the
former problem, and Villarreal and Bulfin[96] and Potts and
Van Wassenhove[86] provide branch and bound algorithms. The
algorithm of Potts and Van Wassenhove uses problem
reductions derived from the knapsack problem and dominance
relations to reduce the size of the search tree. Lower
bounds are derived from the dynamic programming algorithm of
Lawler and Moore [66] and a linear programming relaxation of
the integer programming formulation of the problem.
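Moore's Algorithm itself has a well-known O(n log n) form, sometimes credited to Hodgson: scan the jobs in EDD order and, whenever the job just added would finish late, discard the longest job scheduled so far. A sketch in our own notation:

```python
import heapq

def moore_hodgson(p, d):
    """Moore's Algorithm for 1//sum U_j: returns the minimum number
    of tardy jobs on a single machine."""
    jobs = sorted(range(len(p)), key=lambda j: d[j])   # EDD order
    on_time = []                  # max-heap of processing times (negated)
    t = 0
    for j in jobs:
        heapq.heappush(on_time, -p[j])
        t += p[j]
        if t > d[j]:              # current job would be late:
            t += heapq.heappop(on_time)   # drop the longest job so far
    return len(p) - len(on_time)
```

For the instance p = (4, 3, 2, 5), d = (4, 5, 6, 8) the algorithm schedules two jobs on time, so two jobs are tardy, and no schedule does better.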
The problem with arbitrary release times and due dates,
1/r_j/ΣU_j, is also NP-hard in the strong sense[59,60]. Kise
et al.[58] give a polynomial time algorithm to solve the
case with agreeable release times and due dates, i.e., r_i >
r_j implies d_i > d_j.
It is well known that the problem of minimizing
makespan on a single machine with sequence-dependent setup
times (1/SDST/Cmax) is equivalent to the travelling salesman
problem (TSP), which is NP-complete[3]. Picard and
Queyranne[84] relate the problems of minimizing weighted
lateness, number of late jobs and the sum of weighted
tardiness and flowtime costs to the timedependent TSP. In
this generalization of the TSP the cost of each transition
depends not only on the locations between which it takes
place but also on the position of the transition in the
sequence defining the tour. These authors use relaxations of
integer programming formulations to obtain bounds which they
use in a branch and bound algorithm. Barnes and Vanston[14]
address the problem of minimizing the sum of linear delay
costs and sequence-dependent setup costs, where the delay is
defined as the time elapsing until the job starts being
processed. They examine a number of branch and bound
algorithms and develop a hybrid dynamic programming/branch
and bound approach. Driscoll and Emmons [39] present a
dynamic programming formulation of the problem and
demonstrate some monotonicity properties of the functions
employed. A number of authors, such as Lockett and
Muhlemann[73] , White and Wilson[98] and Irani et al.[57]
have also developed heuristics. These heuristics generally
entail some analysis of the setup operations and the
approximate solution of the resulting TSP.
The problems of minimizing Lmax or ΣU_j with
sequence-dependent setup times (1/SDST/Lmax, 1/SDST/ΣU_j) do
not seem to have been extensively examined. Monma and
Potts[78] present a dynamic programming algorithm and
optimality properties for the case of batch setups, where
setups between jobs from the same batch are zero.
Parallel Machine Scheduling
Lageweg et al. [59,60] give a detailed complexity
classification of results in parallel machine scheduling
without preemption. From this classification it appears that
only problems with unit processing times can be solved in
polynomial time. The problems P2/r_j,d_j,p_j=1/Lmax,
P/r_j,d_j,p_j=1/Σw_jT_j and P/r_j,d_j,p_j=1/Σw_jU_j are among
these[63]. The
other problems in this area are either open or NP-hard.
Considerable effort has been devoted to the development and
analysis of heuristics for these problems [26, 37] .
The problem of scheduling parallel machines in the
presence of sequence-dependent setup times has also been
addressed by a number of researchers. Geoffrion and
Graves[43] examine the problem of scheduling parallel
production lines in the presence of changeover costs and
formulate it as a quadratic assignment problem. Wittrock[99]
presents a heuristic for minimizing total completion time on
a set of parallel identical machines where there are two
types of setups: "family" setups, which are more
time-consuming and are incurred when the product being run
changes drastically, and product setups due to changes from
one product to another in the same family. Computational
experience is reported and a lower bound for the optimal
solution derived. Dietrich [38] examines the problem of
determining schedules that are efficient with respect to
both makespan and flow time for the case of parallel
unrelated machines with sequence-dependent setups. An
integer programming formulation is presented and a heuristic
algorithm developed. Parker et al.[83] use a vehiclerouting
algorithm to solve the problem of minimizing total setup
costs on parallel processors.
Batch Processing Machines
A batch processor is defined to be a machine where a
number of jobs can be processed simultaneously as a batch.
The processing time of a batch is equal to the longest
processing time among all jobs in the batch. Once processing
is begun on a batch, no job can be removed from the machine
until the processing of the batch is complete. These
problems are motivated by burnin operations in the
semiconductor industry, where lots of chips are placed in
ovens and subjected to thermal stresses for an extended
period of time in order to bring out latent defects leading
to infant mortality before the product goes to the customer.
The scheduling of batch processors does not seem to have
been extensively examined in the deterministic scheduling
literature to date. Ikura and Gimple[56] provide an O(n^2)
algorithm to determine whether a feasible schedule (i.e.,
one where all jobs are completed by their due date) exists
for the case where release times and due dates are agreeable
and all jobs have the same processing time. In the event of
a feasible schedule existing, their algorithm finds the one
with minimum finishing time. Bartholdi[15] examines the
problem of minimizing makespan on a single batch processor.
He shows that successively grouping the B longest jobs into
a batch will minimize makespan for the case where all jobs
are available simultaneously. Ahmadi et al.[2] examine the
problems of minimizing mean flow time and makespan in
flowshops consisting of batch and unitcapacity machines,
assuming that all jobs require the same processing time on
the batch machine. They provide polynomialtime algorithms
for a number of cases and provide NP-completeness proofs and
heuristics for others.
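Bartholdi's grouping rule for minimizing makespan on a single batch processor with simultaneously available jobs reduces to a few lines, since a batch's processing time equals that of its longest job. The sketch below is our own illustration, with B denoting the assumed machine capacity:

```python
def batch_makespan(p, B):
    """Makespan on one batch machine of capacity B, all jobs available
    at time zero: batch the B longest jobs together, then the next B
    longest, and so on; each batch runs for its longest job's time."""
    longest_first = sorted(p, reverse=True)
    batches = [longest_first[i:i + B] for i in range(0, len(p), B)]
    return sum(batch[0] for batch in batches)   # batch[0] is its maximum
```

For processing times (7, 3, 5, 2, 6) and B = 2 the batches are {7,6}, {5,3}, {2}, giving a makespan of 14.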
Related problems seem to have been more extensively
examined from a stochastic perspective. Neuts[80] considers
a case where customers are served in groups. Service can
only start if a certain number of customers are waiting, and
the number of customers which can be served together is
limited. He examines system characteristics such as the
output process, queue length and number of customers served
over a long period of time. Medhi[77] examines the
distribution of waiting times in this system when service
times are exponential. Deb and Serfozo[36] use dynamic
programming to minimize expected total or average costs.
Concluding this subsection, it emerges that the
problems in the areas of single, parallel and batch machine
scheduling of the types examined in this research have not
been examined extensively in the literature and a great many
of them are NP-hard.
Research on Semiconductor Manufacturing
Despite the ever-increasing role played by the
semiconductor industry in worldwide technological
development it is only recently that semiconductor
manufacturing systems have attracted the attention of
industrial engineering and management science researchers.
One of the earliest articles available in the published
literature is that of Burman et al.[18], which discusses
methods of using various operations research tools to
enhance productivity in wafer fabs. They compare the
usefulness of simulation, deterministic capacity models and
queueing models. Simulation models can be developed to
model an entire production operation with a view to
answering many potential questions, or as smaller models to
address specific issues. However, they point out that
considerable effort is needed to develop the model and to
analyze the output. Deterministic capacity models are used
for capacity estimation purposes, are easy to develop and
quick to run but limited in the range of questions they can
address. Queueing models can be developed that can be used
to examine a broader set of issues than the deterministic
capacity models, but the mathematical assumptions they make
tend to render them inaccurate representations of the
physical system. The authors then proceed to give an example
for the application of each technique in a wafer fab
environment.
Considerable effort has gone into the development of
simulation models for wafer fabs and their use to analyze
the effects of different control strategies. Dayhoff and
Atherton [32,33,34,35] have developed such a model and used
it to analyze the performance of wafer fabs under different
conditions. Their approach is based on modelling a fab as a
special type of queueing network. Similar approaches, namely
the modelling of the wafer fab as a network of queues and
the subsequent use of a simulation model, are followed by
Wein[97], Glassey and Resende[44,45] and Lozinski and
Glassey[74]. Wein[97] evaluates the effect of scheduling on
the performance of wafer fabs, taking cycle time as the
measure of interest. He examines two different types of
control strategy: regulation of input, where the number of
lots started into the fab is controlled, and sequencing of
lots at the individual stations. He observes that input
regulation yields larger improvements than sequencing at the
individual stations, and that the effects of sequencing
rules depend heavily on the number and location of the
bottleneck stations and the specific input regulation
mechanism involved.
Glassey and Resende[44,45] point out that due to the
extensive use of Computer-Integrated Manufacturing (CIM)
systems such as the COMETS system [27] dispatching decisions
at the individual stations and lot release decisions
governing the release of work to the fab can be made based
on more global information. Similarly to Wein[97], they
examine the effects of input regulation mechanisms, assuming
a single bottleneck workstation in a fab with a single
product and constant demand rates. They develop a
rule for input regulation which attempts to release work
into the fab so that it will arrive at the bottleneck
station just in time to stop it from starving. The authors
compare this strategy with a number of others and report
favorably on its performance, which is measured based on a
tradeoff between cycle time and throughput. Lozinski and
Glassey [74] discuss implementation issues. Leachman et
al.[68] further develop this approach by removing the need
for a priori bottleneck determination. Spence and Welter [89]
use a simulation model to examine the performance of a
photolithography workcell based on a throughput-cycle time
tradeoff.
Chen et al.[24] develop a queueing network model of a
research and development wafer fab operation. The network
contains a number of different customer classes,
corresponding to different lot types. The model
is of a mixed nature, that is, open for certain classes of
customers and closed for others. After defining parameters
such as expected number of visits to each station and
station service rates, an iterative procedure is employed to
arrive at throughput rates for the entire network and other
quantities of interest such as average throughput time per
customer at each station. The results obtained from the
model are compared with actual observed data and found to be
in close agreement.
Bitran and Tirupati[16, 17] describe a scheduling system
for a facility manufacturing epitaxial wafers. They model
this facility as a single-stage parallel-machine system, and
propose a number of heuristics with a view to optimizing two
criteria, makespan and total tardiness. They also examine
product mix and the associated problem of assigning product
families to reactors so as to achieve a more homogeneous
product mix. This is formulated as a convex program. They
recommend various different heuristics for different cases,
and observe that when the jobs are preprocessed by assigning
product families to reactors a priori, simpler heuristics
give results comparable to the more complex procedures they
develop.
As can be seen from the above review, the majority of
the approaches to scheduling of semiconductor manufacturing
facilities are of the nature of input regulation mechanisms
and dispatching rules at the individual stations. A
significant exception is the work of Bartholdi et al.[15]
which is related to the work in this dissertation. The most
important part of these authors' work is their application
and extension of the Shifting Bottleneck (SB) approach of
Adams et al.[1] to wafer fabrication operations. These
authors model the wafer fab using the SB approach and extend
the basic model in various ways to include parallel
identical machines and batch processing machines. To model
parallel identical machines, they start from the observation
that a sequence for a single machine k corresponds to a path
connecting all nodes in the associated selection S_k. A
schedule for a workcenter with m parallel identical machines
will then correspond to m disjoint paths, each one
corresponding to the sequence for one of the m machines.
Each node corresponding to an operation has to be visited by
one and only one path. Batch processing machines are
represented using stars. An n-star is a graph with n-1 arcs
containing a node c, called the center, which is adjacent to
all other nodes, called satellites. If a batch can be
processed at a workcenter, this can be represented by a star
with a center corresponding to the operation with the
longest processing time. The costs of the arcs leaving the
satellites are set to 0, and the costs of the arcs leaving
the center are set to the longest processing time in the
batch. In the case of several batches being available, there
will be as many stars as batches. This assumes, however,
that the assignment of operations to batches is already
known.
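Under the stated assumption that the assignment of operations to batches is already known, the arc costs of one such star can be sketched as follows (an illustrative sketch only; the dictionary layout and names are invented):

```python
def batch_star(batch):
    """Given {operation: processing_time} for one batch, return the star.
    Center = the operation with the longest processing time; arcs leaving
    the satellites cost 0, arcs leaving the center cost that longest time."""
    center = max(batch, key=batch.get)
    p_max = batch[center]
    arcs = {}
    for op in batch:
        if op == center:
            continue
        arcs[(center, op)] = p_max   # arc leaving the center
        arcs[(op, center)] = 0       # arc leaving a satellite
    return center, arcs

center, arcs = batch_star({"11": 4, "12": 7, "13": 5})
print(center)                                   # → 12
print(arcs[("12", "11")], arcs[("11", "12")])   # → 7 0
```

With several batches available, one star of this form would be built per batch, as the text notes.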
Leachman[67] gives a corporate-level production
planning model for the semiconductor industry. He divides
the manufacturing process into the stages of fab, probe,
assembly and test, linked by inventories. The model may
include multiple facilities, and treats entire production
processes in each plant as integral entities. Products
undergoing the same process at each stage are aggregated
into families. Computerized routines create the input files
of an aggregate planning model and then generate the linear
programming formulation. The solution to this linear program
yields a production plan at the process level of detail,
which is then validated by management. If it is invalid, due
to some resource being overutilized for instance, the input
data are revised and the process repeated until an
acceptable plan is generated. Once an aggregate plan has
thus been obtained, it is disaggregated by solving a number
of linear programs to divide the volume of production
planned for each product family over the individual
products. This model has been used by a number of
manufacturers in the industry.
A number of commercial software systems for the
planning and control of semiconductor manufacturing
operations have been developed. One such system widely used
in industry is COMETS [27], marketed by Consilium, Inc. and
used by a number of leading companies. This system is
composed of a number of different modules, and is designed
as an integrated plant management system, with all the
different groups involved in the manufacturing process being
supported by the same database. The main modules of interest
to this study are the Work-in-Process (WIP) tracking module,
the Activity Planner/Dispatch (AP/D) module and the Short-
Interval Scheduling (SIS) module. Other modules such as
engineering data collection, factory communications and on-
line specifications are also available.
The ShortInterval Scheduling (SIS) module of COMETS
gives the user real-time scheduling capabilities. The
process is modelled using the concept of dispatch stations,
which are essentially points in the process where inventory
accumulates and a scheduling decision is required. SIS
enables the user to develop his own dispatching rules. This
is done by defining a set of priority classes, with a strict
hierarchy, and then using rules to define the conditions
under which a lot may be a member of a class. Lots at the
dispatch station are prioritized according to the status of
the system at the moment the request for the dispatch list
was made, and the operator selects the lot with the highest
priority for processing. The module makes information like
machine status (up/down, setup) at the dispatch station
itself or a subsequent station, time spent by a lot at the
station and setups required by lots awaiting processing
available to the user. Detailed information on this software
module can be found in Consilium[27].
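The priority-class mechanism described above can be illustrated with a small sketch (the rules and field names are hypothetical, invented for illustration; this is not Consilium's implementation):

```python
def dispatch_list(lots, rules):
    """Assign each lot the highest (lowest-numbered) priority class whose
    rule it satisfies, then order: class first, longest-waiting first."""
    prioritized = []
    for lot in lots:
        for cls, rule in enumerate(rules):   # rules ordered by priority class
            if rule(lot):
                prioritized.append((cls, -lot["waiting_time"], lot["id"]))
                break
    prioritized.sort()
    return [lot_id for _, _, lot_id in prioritized]

# Hypothetical classes: 0 = hot lots, 1 = lots needing no setup change, 2 = rest
rules = [
    lambda l: l["hot"],
    lambda l: l["setup"] == "current",
    lambda l: True,
]
lots = [
    {"id": "A", "hot": False, "setup": "current", "waiting_time": 5},
    {"id": "B", "hot": True,  "setup": "other",   "waiting_time": 1},
    {"id": "C", "hot": False, "setup": "other",   "waiting_time": 9},
]
print(dispatch_list(lots, rules))  # → ['B', 'A', 'C']
```

The operator would then select the lot at the head of this list, exactly as in the dispatch-station procedure described above.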
Thus, the scheduling technology present in this system
is the classical dispatching rule, using mostly local
information in the immediate environs of the dispatch
station and not using global information at all.
Summary
In this chapter we have reviewed the current body of
knowledge in the areas of scheduling theory and its
applications in semiconductor manufacturing. We shall now
examine the contributions of the research in this
dissertation to these areas.
The problems of scheduling the job shop to minimize
maximum lateness and number of late jobs are NP-hard and
have not been extensively studied. The excellent
computational performance of the SB methodology for
minimizing makespan suggests that similar approximation
methods can rapidly yield good solutions for these
problems. The development of such methods contributes
significantly to the area of job shop scheduling. The
extension of the job shop model to include multiple-machine
workcenters and batch processing machines also extends
modelling capabilities in this area.
The single, parallel and batch processing machine
problems that constitute the subproblems in an approximation
approach are also of considerable interest and have not been
examined extensively in the literature. In Chapter V exact
and heuristic solution procedures for the problems of
minimizing maximum lateness and number of tardy jobs are
developed. The worstcase analysis of the heuristics
developed is the first such analysis known to the author for
problems of this type. In Chapter VI the problem of
scheduling batch processing machines for a number of
different performance measures is examined. Optimal solution
procedures and heuristics are presented, together with a
complexity classification of these problems.
The problem of scheduling in the semiconductor industry
seems to have been addressed mainly through dispatching
rules. The ultimate goal of this research is the development
of algorithms capable of being incorporated into a decision
support tool to assist shop-floor personnel in real-time
decision-making. This would constitute a significant
improvement over available commercial scheduling systems,
which are based solely on dispatching rules. The
consideration of the status of the entire job shop should
yield considerably better schedules, especially for
bottleneck resources.
CHAPTER IV
MODELLING APPROACH
Introduction
In this chapter we shall formulate the problem of
scheduling a semiconductor test facility as a job shop
scheduling problem. We shall then present an approximate
solution methodology for this problem similar to the Shifting
Bottleneck (SB) methodology of Adams et al. [1] for the J//Cmax
problem described in the previous chapter. The basic SB
methodology is extended in a number of ways to be able to
address the type of job shop under study. Chapters V and VI
develop algorithms necessary to solve the local problems in
the approximation approach, and a prototype implementation of
the approximation scheme is described in Chapter VII.
Modelling of Job Shop
In a semiconductor testing facility, product moves
through the area in lots, which vary in size from a few
individual chips to several thousand. Once processing of a
lot has begun, it cannot be interrupted until the whole lot
has been completed. Processing takes place at a number of
workcenters, generally consisting of one or more identical
machines. The machines may be testers, branders or burnin
ovens, to name a few. These machines differ considerably in
scheduling characteristics. For example, testers have
sequence-dependent setup times, while branders do not. Test
systems and branders can process only one lot at a time, while
burnin ovens can process a number of lots together as a
batch.
Hence a natural way to model a semiconductor test
facility as a job shop scheduling problem is to model each
lot of chips as a job, and each group of similar machines
scheduled as a unit as a workcenter. Note that this is a
somewhat more general problem than the classical job shop
scheduling problem discussed in Chapter III. The common
assumptions in this problem are that each machine is visited
by each job only once, that each machine can process only one
job at a time and that setup times are not sequence-dependent.
In the semiconductor test facility that provided the
motivation for this study, however, there are several
differences from this model:
 The presence of different types of workcenters, some
consisting of multiple identical machines, some of a single
machine and some of one or more batch processing machines,
where a number of jobs are processed together as a batch.
 The presence of sequence-dependent setup times at some
workcenters.
 The performance measures being related to lateness
instead of makespan.
 The possibility that a given job may return to a
certain workcenter more than once (reentrant work flows). For
example, if a lot of chips has to be tested at three different
temperatures, all three operations are carried out at the same
test workcenter. This also results in the presence of
precedence constraints between operations at a given
workcenter, a complication not present in the classical
J//Cmax problem.
Recall from Chapter III that the disjunctive graph
representation of the job shop scheduling problem has formed
the basis for many solution approaches. From the point of view
of this research, the most important application is the use
made of it in the SB methodology to capture interactions
between different workcenters as the methodology proceeds and
a complete job shop schedule is built up. We shall now give
the disjunctive graph representation of the job shop defined
by a semiconductor test facility and describe how it is used
to capture interactions between individual workcenters.
Throughout the rest of this chapter, we will make the
following assumptions:
 All handlers, load boards, contacts and operators are
freely available at all times.
 Operations on the same lot have a strict precedence
relation which is known a priori. Once processing on a lot
has started, the entire lot has to be completed.
 All the process times and sequence-dependent setup
times are available and deterministic.
Disjunctive Graph Representation
In order to construct the disjunctive graph
representation of the job shop, let us first consider the case
of a workcenter consisting of a single machine. The following
notation will be used:
ij = operation i of lot j
p_ij = processing time for operation i of lot j
s_ij,kl = setup time required for a change from operation i
of lot j to operation k of lot l on the workcenter
Let us now construct the disjunctive graph representation
of the workcenter as follows. Assume there are N operations
to be processed at the workcenter. Add a source node 0, and
associate a node ij with each operation i to be carried out
on lot j at the workcenter. With each lot i to be processed
at the workcenter, associate a sink node i* that represents
the completion of that lot. This is similar to the approach
used by Heck and Roberts [55] for average tardiness
minimization in flowshops. Define the arc set as follows:
Associate a conjunctive arc (ij,kl) between pairs of
operations ij and kl where ij must precede kl at the
workcenter. Each of these arcs represents a precedence
constraint between the two operations corresponding to the
nodes at each end. Add a conjunctive arc (0,ij) from the
source node to all nodes representing operations ij having no
fixed predecessor, and another conjunctive arc (ij,j*) from
all nodes ij representing the final operation ij on lot j to
the sink node.
Associate a pair of disjunctive arcs between all pairs
of nodes (ij,kl) that correspond to operations that can be
carried out at the workcenter and have no precedence relation.
With each arc, conjunctive or disjunctive, associate a
cost c_ij,kl defined as
c_ij,kl = p_ij + s_ij,kl
Assume p_0j = 0 for all j. The sequence-dependent setup times
are thus taken into account. All process and setup times
associated with the sink nodes i* are assumed to be zero. An
example of a workcenter with three lots is shown in Fig. 4.1.
Arc costs are omitted for the sake of clarity. The first lot
has two operations, represented by nodes 11 and 21, while two
other lots have one operation each. Notice that each path from
source to sink consisting of only conjunctive arcs corresponds
to a lot. Operation 11 has to be carried out before operation
21, hence the conjunctive arc between nodes 11 and 21. The
possible sequences of operations for this workcenter are
described by the pairs of disjunctive arcs. Each sequence for
the workcenter corresponds to a selection of exactly one of
each disjunctive pair. The sequence of operations 11-21-12-
13, for example, is represented by the graph in Fig. 4.2.
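The construction can be illustrated with a short sketch (invented names; it assumes, per the discussion above, that operations within a lot form a string of precedence constraints, so disjunctive pairs arise only between operations of different lots):

```python
from itertools import combinations

def workcenter_graph(lots, p, s):
    """lots: {lot: [operations in precedence order]} (the 'strings');
    p: processing times; s: sequence-dependent setups s[(ij, kl)].
    Arc cost c_ij,kl = p_ij + s_ij,kl; p_0j = 0; sink arcs carry p_ij only."""
    cost = lambda ij, kl: p.get(ij, 0) + s.get((ij, kl), 0)
    conj = {}
    for lot, ops in lots.items():
        conj[("0", ops[0])] = cost("0", ops[0])   # source -> first operation
        for a, b in zip(ops, ops[1:]):            # within-lot precedence
            conj[(a, b)] = cost(a, b)
        conj[(ops[-1], lot + "*")] = p[ops[-1]]   # last operation -> sink
    lot_of = {op: lot for lot, ops in lots.items() for op in ops}
    # disjunctive pairs: operations of different lots (no precedence relation)
    disj = [(a, b) for a, b in combinations(sorted(lot_of), 2)
            if lot_of[a] != lot_of[b]]
    return conj, disj

# Workcenter of Fig. 4.1: lot 1 has operations 11 and 21, lots 2 and 3 one each
lots = {"1": ["11", "21"], "2": ["12"], "3": ["13"]}
p = {"11": 3, "21": 2, "12": 4, "13": 1}
s = {("11", "21"): 1}
conj, disj = workcenter_graph(lots, p, s)
print(conj[("11", "21")], len(disj))  # → 4 5
```

A sequence for the workcenter then corresponds, as described, to selecting one arc from each of the five disjunctive pairs.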
[Figure 4.1. Disjunctive graph representation of a workcenter.]
[Figure 4.2. Graph representing the sequence 11-21-12-13.]
In order to represent the entire job shop as a disjunctive
graph, we represent each workcenter in the manner described
above. However we no longer define a source and sink node for
each workcenter. Instead the nodes that would be linked to the
source at each workcenter are now linked to nodes
corresponding to operations on that lot at preceding
workcenters. We create a source node for the entire facility,
to which all nodes corresponding to operations with no
predecessors are linked, and again associate a sink node i*
with the completion of the final operation on each lot i.
An example for a job shop with two workcenters is shown
in Fig. 4.3. Operations 11, 21, 12 and 13 take place at the first
workcenter, while 31, 22 and 23 take place at the second. Lots
must be processed at the first workcenter before they can be
processed at the second. Nodes 1*, 2* and 3* denote the
completion of the lots.
Approximation Methodology
Now that we have formulated the problem of scheduling a
semiconductor test facility as a job shop scheduling problem
and have shown how it can be represented using a disjunctive
graph, we are ready to present an approximation methodology
for its solution similar to the SB methodology of Adams et
al.[1]. The approach may be outlined as follows:
1) Divide the job shop into a number of workcenters
numbered 1,...,m that have to be scheduled. Let M be the set
62
o
js
w
■a
o
o
P.
ca
O
a)
>
•H
4J
U
S
3
Q
pl
ft
u
p
faO
63
of workcenters, and M' the set of all workcenters that have
been sequenced. Initially, M' = ∅.
2) Represent the job shop using a disjunctive graph.
3) From among the nonsequenced workcenters k ∈ M \ M',
determine the most critical workcenter j .
4) Sequence the critical workcenter j. Fix the selection
of disjunctive arcs S_j corresponding to this sequence. Set
M' = M' ∪ {j}.
5) Use the disjunctive graph representation to capture
the interactions between the workcenters already scheduled
and those not yet scheduled.
6) Resequence those workcenters that have already been
sequenced using the new information obtained in Step 5. If
M' = M, stop. Else, go to Step 3.
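The six steps above can be outlined as a generic loop in which the criticality measure (Step 3), the sequencing algorithm (Step 4), the graph-based update (Step 5) and the resequencing procedure (Step 6) are pluggable; the stubs below are placeholders, not the algorithms themselves:

```python
def shifting_bottleneck(workcenters, criticality, sequence, update, resequence):
    """Structural sketch of Steps 1-6. The four callables stand in for the
    criticality measure, the workcenter sequencing algorithm, the graph-based
    constraint update and the reoptimization of already-sequenced workcenters."""
    scheduled = set()                  # M' in the text; M = set(workcenters)
    sequences = {}
    order = []                         # order in which workcenters were chosen
    while scheduled != set(workcenters):
        j = max(set(workcenters) - scheduled, key=criticality)  # Step 3
        sequences[j] = sequence(j)                              # Step 4
        scheduled.add(j)
        update(sequences)                                       # Step 5
        for k in scheduled - {j}:                               # Step 6
            sequences[k] = resequence(k, sequences)
        order.append(j)
    return order, sequences

# Toy run: "criticality" = total work at the workcenter; stubs elsewhere
work = {"test": 9, "brand": 4, "burnin": 6}
order, seqs = shifting_bottleneck(
    list(work), criticality=work.get, sequence=lambda j: [j],
    update=lambda s: None, resequence=lambda k, s: s[k])
print(order)  # → ['test', 'burnin', 'brand']
```

With the stub criticality measure, the most heavily loaded workcenter is sequenced first, mirroring the bottleneck-first intent of the methodology.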
The main body of the methodology is contained in Steps
3 through 6. We shall now discuss each of these steps
individually.
Step 3: Determination of Critical Workcenter
The objective of this phase is to determine which
workcenter is most critical, in the sense that a poor schedule
on that workcenter will result with high probability in a poor
overall schedule for the job shop. For this stage, Adams et
al.[1] use the optimal solution to a relaxed problem which
ignores machine interference between machines not yet
sequenced. Since all of their subproblems are of the same type
and are solved to optimality, this is a good indicator since
all machines are compared equally.
In the case of the job shops under study here, this issue
becomes more complicated. While extremely fast branch and
bound algorithms are available to solve subproblems for
workcenters without setup times, such methods are not yet
available for the case where sequencedependent setups are
present. This would seem to force us to use heuristics to
obtain solutions to the relaxed problems for this type of
workcenter, thus losing the common denominator of optimality
present in the case of Adams et al.[1].
One possibility is to try and ensure equitable
comparisons between the different types of problems by using
heuristics with comparable performance to evaluate each
different type of workcenter problem. In this case we would
define performance in terms of average or worstcase
performance. The prototype implementation described in Chapter
VII uses this approach.
An interesting point is that although in their
methodology Adams et al.[1] use the algorithm of Carlier[20]
both to make criticality decisions and to sequence the
critical workcenter, there is no apparent need to do so in the
case of the more general shops under consideration in this
research. In the case of Adams et al.[1] the use of the same
algorithm for both purposes is extremely logical, since all
subproblems are of the same type and the optimal sequence of
the critical machine is available at the end of the first
stage anyway. However, in view of the intrinsically more
difficult subproblems considered in this study, it might make
sense to use a fast heuristic to make criticality decisions
and a more computationally intensive algorithm to sequence the
critical workcenter as well as possible. This makes even more
sense when we note that the problems relating to criticality
decisions have to be solved for each unscheduled machine,
while the problem of sequencing the critical machine need only
be solved for that one machine at each iteration of the
general methodology.
Step 4: Sequencing of the Critical Workcenter
This phase consists of finding a good, preferably optimal
sequence for the critical workcenter, fixing it and modifying
the constraints such as finish times and release times on
other machines according to the results obtained. An
algorithm used at this stage should ideally be fast from a
computational point of view and generate solutions whose
deviation from the optimal could be bounded within a
reasonable interval. Extremely efficient branch and bound
algorithms are available in the literature for the cases
without sequence-dependent setup times. In the next chapter
we present optimal and heuristic algorithms for single-machine
workcenters with sequence-dependent setup times. How some of
these algorithms can be incorporated into the approximation
methodology is illustrated in the prototype implementation in
Chapter VII.
Step 5: Use of Disjunctive Graph to Capture Interactions
Note that when a certain subset of the workcenters have
been sequenced, certain constraints are imposed on the
sequencing problems for the remaining workcenters. Jobs will
become available for processing at certain times (release
times) depending on how the previous workcenter is scheduled.
It is also important to have estimates of the time by which
an operation must be completed on a particular workcenter
(operation due dates) in order to allow the lot it is
performed upon to complete on time. These operation due dates,
in turn, form the input to the algorithms used to determine
and schedule the critical workcenter.
If we fix the disjunctive arcs associated with the
sequences of workcenters already sequenced, we can estimate
the release times and operation due dates for operations by
performing a number of longest path calculations in the
resulting directed graph, in a manner analogous to calculating
early start and latest finish times in a CPM problem[3]. If
we denote by L(ij,kl) the length of a longest path from ij to
kl in the directed graph described above, the release time,
i.e., the earliest start time, of operation ij is given by
r_ij = L(0,ij) - s_kl,ij
where kl is the operation preceding ij on the longest path,
and the operation due date d_ij by
d_ij = d_j - L(ij,j*) + p_ij
Both these expressions use the longest path operator
L(ij,kl) to estimate the time that will elapse between the
start of operation ij and the completion of operation kl. Note
that in most cases this will underestimate the actual time
needed, since it will ignore machine interference effects at
the machines not yet scheduled. A similar approach for the
estimation of operation due dates from job due dates has been
used by Vepsalainen and Morton [94, 95] and Baker [4]. An
extensive survey of the literature on due date estimation can
be found in Cheng and Gupta [25].
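A minimal sketch of these longest path calculations, assuming the fixed directed graph is supplied with its nodes in topological order (the data layout and names are invented for illustration):

```python
def longest_from(src, topo_nodes, arcs):
    """Longest-path lengths (and predecessors) from src in a DAG.
    topo_nodes lists the nodes in topological order; arcs: {(u, v): cost}."""
    dist = {n: float("-inf") for n in topo_nodes}
    pred = {n: None for n in topo_nodes}
    dist[src] = 0
    for u in topo_nodes:
        if dist[u] == float("-inf"):
            continue
        for (a, b), c in arcs.items():
            if a == u and dist[u] + c > dist[b]:
                dist[b], pred[b] = dist[u] + c, u
    return dist, pred

def operation_windows(topo_nodes, arcs, s, p, due, sink_of):
    """r_ij = L(0,ij) - s_kl,ij (kl = predecessor of ij on the longest path);
    d_ij = d_j - L(ij,j*) + p_ij, with L(ij,j*) found on the reversed graph."""
    dist, pred = longest_from("0", topo_nodes, arcs)
    rev = {(b, a): c for (a, b), c in arcs.items()}
    back = {j_star: longest_from(j_star, list(reversed(topo_nodes)), rev)[0]
            for j_star in set(sink_of.values())}
    r, d = {}, {}
    for ij in sink_of:                 # sink_of maps each operation to its sink
        r[ij] = dist[ij] - s.get((pred[ij], ij), 0)
        d[ij] = due[sink_of[ij]] - back[sink_of[ij]][ij] + p[ij]
    return r, d

# One lot with two operations: 0 -> 11 -> 21 -> 1*, lot due date 10
p = {"11": 3, "21": 2}
s = {("11", "21"): 1}
arcs = {("0", "11"): 0, ("11", "21"): p["11"] + s[("11", "21")],
        ("21", "1*"): p["21"]}
r, d = operation_windows(["0", "11", "21", "1*"], arcs, s, p,
                         {"1*": 10}, {"11": "1*", "21": "1*"})
print(r, d)  # → {'11': 0, '21': 3} {'11': 7, '21': 10}
```

In the toy instance, operation 11 must finish by 7 so that the setup (1) and processing (2) of operation 21 can complete by the lot due date of 10, matching the intent of the operation due date formula.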
Thus the graph representation is used to capture
interactions between the different workcenters. Each time a
workcenter is sequenced, the due dates and release times of
operations on other workcenters are updated in order to
include the constraints imposed on the entire system by the
sequencing of that machine.
It is clear from the above discussion that the solution
of the longest path problems required to set up the local
problems at each iteration will form a major part of the
computational burden of the approximation methodology. Adams
et al.[1] have developed a longest path algorithm that
exploits problem structure and has O(n) complexity as opposed
to the O(n^2) complexity of conventional longest path
algorithms. This algorithm must be extended to the case where
parallel identical machines and batch processing machines are
present. However, when each workcenter consists of a single
machine it results in substantial savings in computation time.
Step 6: Resequencing in the Light of New Information
This step consists of resequencing the workcenters that
have already been sequenced in the light of the constraints
imposed on them by fixing the schedule of the latest scheduled
machine. The main point here is that it may not be necessary
to resequence all machines already sequenced. Some machines
may not interact at all with the newly scheduled machine, and
thus the sequence on this machine will not affect them at all,
while others may be affected only insignificantly. What is
needed here is some way of determining what machines are the
most important to resequence, taking into account the
structure of the job shop and other relevant information.
Heck's extension of the concept of a critical path to
lateness [54] may form the basis of an approach to this.
Experimentation with Overall Methodology
The development of an efficient methodology based on the
Shifting Bottleneck concept for the types of job shops studied
here clearly requires a good deal of empirical work. The
methodology itself will consist of a combination of heuristics
and optimization algorithms to make the criticality decisions
and sequence the critical workcenters. Other heuristics may
be used to determine which workcenters should be resequenced
at the end of each iteration. The sensitivity of the overall
procedure to the various procedures used for each of these
purposes needs to be extensively investigated.
The empirical analysis of such a complex approximation
procedure for a large combinatorial optimization problem poses
interesting difficulties. First of all, one issue is
determining how good the results of the approximation scheme
are. Comparison of the results of the approximation
procedure and dispatching rules, which currently constitute
the state of the art in most industry environments, is one
approach. This would allow a statistical comparison of the
procedures to be made [47]. Estimation of how close to the
optimum results are, however, is more difficult due to the
fact that obtaining optimal solutions to realistic problems
is prohibitively timeconsuming. For this purpose, use of
statistical techniques to estimate optimal values offers one
avenue of approach. Such techniques have been developed and
documented by Dannenbring[30] and Golden and Alt [46]. The
overall goal is to configure a specific methodology for the type
of job shop under consideration, specifying what algorithms
to use at each step for each type of subproblem, in order to
arrive at a robust way of obtaining good solutions.
CHAPTER V
SINGLE MACHINE WORKCENTERS
Introduction
In the previous chapter the main methodology with which
the problem of sequencing the job shop under study would be
attacked was outlined. This methodology requires repeated
solution of subproblems related to the sequencing of
individual workcenters.
In this chapter, problems motivated by the modelling of
workcenters consisting of a single machine will be
formulated as scheduling problems and methods for solution
presented.
Description of a SingleMachine Workcenter
These problems were motivated by the need to schedule
workcenters consisting of a single tester. A number of lots,
some of which may require more than one operation with
different setups, need to be processed at the workcenter. If
a lot requires more than one operation, there are strict
precedence constraints between them defining the order in
which they have to be performed. Note that a special
precedence structure results since there are no precedence
relations between operations on different lots. Thus, the
problem becomes that of sequencing a number of "strings" of
operations which must be processed in the order suggested by
the precedence constraints but not necessarily in immediate
succession. An example of the precedence graph for a single
workcenter problem with three different lots is shown in
Figure 5.1.
All operations on the same lot have the same due date.
The measures of performance we wish to optimize are
functions of the completion times and due dates of the lots,
not of the individual operations. The performance measures
of maximum lateness of a lot and number of tardy lots will
be examined in this research.
Due to the nature of the production technology, the
sequence-dependent nature of the setups is explicitly
considered.
Let us define the following notation for the single
workcenter problem:
n = number of operations to be scheduled
m = number of lots to be scheduled
N = set of operations to be performed at the
workcenter
ij = operation i on lot j
p_ij = processing time of operation i of lot j on the
workcenter
d_ij = due date of operation i of lot j
s_ij,kl = setup time necessary to change from operation i
of lot j to operation k of lot l on the workcenter
r_j = the time lot j becomes available at the
workcenter, i.e., the release time of lot j
In order to integrate the subproblems into the main
approach, it is necessary to include release times r_j. These
times represent the time the lot arrives at the workcenter
from previous processing steps. However, in order to gain
insight, we shall first relax the release times. In a later
section we shall examine heuristics for the case with non-
simultaneous release times.
Minimizing Maximum Lateness
The first problem we shall examine is that of
minimizing maximum lateness (Lmax) of a lot. This can be
stated as follows:
"Minimize the maximum lateness of a lot in the presence
of precedence constraints and sequence-dependent setup
times."
Extending the classification of Lageweg et al. [59,60],
this problem will be written as 1/prec,SDST/Lmax, where SDST
denotes the presence of sequence-dependent setup times.
Recall from Chapter III that this problem is NP-hard. Thus
the approaches left open to us are the development of
implicit enumeration methods, or the design of heuristics.
Algorithms for 1/prec,SDST/Lmax
In this section we shall present two algorithms
developed to obtain solutions to 1/prec,SDST/Lmax. The first
is a branch and bound approach that makes use of the fact
that 1/prec,SDST/Lmax is equivalent to the problem of
minimizing makespan in the presence of delivery times,
1/prec,q_j,SDST/Cmax. The second is a dynamic programming
algorithm which exploits the special structure of both the
precedence constraints and the setup time matrix.
A branch and bound algorithm for 1/prec,q_j,SDST/Cmax
Recall from Chapter III that the 1/prec,SDST/Lmax
problem can be transformed into an equivalent problem of
minimizing Cmax in the presence of delivery times,
1/prec,q_j,SDST/Cmax. In this section we describe a branch
and bound algorithm to find optimal solutions to
1/prec,q_j,SDST/Cmax.
Following the approach of Carlier[20], with each
feasible sequence for this problem we can associate a
directed graph G = (X,U). The node set X consists of a node
for each operation ij carried out on the workcenter, plus a
source node and a sink node *. The arc set consists of
three types of arcs, U1, U2 and U3, defined as follows:
U1 = the set of arcs (0,ij) whose cost is equal to 0,
except for the first operation ij in the sequence, for which
it is s_0,ij,
U2 = the set of arcs (ij,*) with costs p_ij + q_ij,
U3 = the set of arcs (ij,kl) where ij immediately
precedes kl in the sequence. These arcs have costs equal to
p_ij + s_ij,kl.
The value of Cmax for a feasible sequence is equal to
the length of a longest path in the associated graph G(X,U).
An example of such a graph is shown in Figure 5.2. The nodes
have been numbered according to their occurrence in the
sequence, with [i] representing the i'th operation in the
sequence corresponding to this graph. Another important
property of this graph is that the node corresponding to the
operation with completion time equal to Cmax will be the
node immediately preceding * on a longest path.
Hence the problem of minimizing Cmax can be viewed as
the problem of finding a sequence such that the length of
the longest path in the corresponding graph G is minimized
over the set of graphs corresponding to all feasible
sequences. We can state the algorithm as follows:
Algorithm BB:
Step 1: Let K = max_{ij∈N} {d_ij}. Calculate q_ij = K - d_ij for
each operation ij.

Step 2: Obtain an initial feasible solution by applying
some heuristic to the problem. Set the upper bound UB to the
value of Cmax for this solution. Let S denote the set of
operations available for sequencing, i.e., those whose
predecessors have been sequenced, and let P be the partial
sequence of operations already sequenced. Set S to be the
set of operations without predecessors, and P = {}. This
corresponds to the root node of the search tree.

[Figure 5.2. An example of the directed graph G = (X,U)
associated with a feasible sequence.]
Step 3: Branch by appending each member of S in turn
to the right of the partial sequence P associated with the
current node.

Step 4: For each new node generated at Step 3, perform
the following:

i) Calculate a lower bound LB as described below.

ii) If LB > UB, fathom this node and go to Step 5.
Else, check if LB corresponds to a feasible solution. If so,
set UB = LB. Update S by adding to it the successors of the
last sequenced operation. Go to Step 5.

Step 5: Select for further expansion the open (i.e.,
not fathomed or already expanded) node with the lowest
associated LB value. If no such node can be found, an
optimal solution has been obtained. Else, go to Step 3.
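A minimal best-first sketch of Algorithm BB follows. It is an illustration, not the dissertation's implementation: the instance encoding is assumed, and the lower bound used here simply ignores the setup times rather than using the assignment-based LB1 derived below.

```python
import heapq
from itertools import count

def branch_and_bound(p, q, s0, s, prec):
    """Best-first branch and bound sketch for 1/prec,qj,SDST/Cmax.
    p[j], q[j]: processing and delivery times; s0[j]: setup before a first
    operation j; s[i][j]: setup when j directly follows i; prec: (a, b)
    pairs meaning a must precede b. Returns (optimal Cmax, sequence)."""
    n = len(p)
    preds = [set() for _ in range(n)]
    for a, b in prec:
        preds[b].add(a)
    tick = count()                      # tie-breaker keeps heap tuples comparable
    heap = [(0, next(tick), (), 0, 0)]  # (LB, tick, partial seq, finish, Cmax so far)
    best, best_seq = float("inf"), None
    while heap:
        lb, _, seq, finish, obj = heapq.heappop(heap)
        if lb >= best:                  # best-first: no open node can improve
            break
        if len(seq) == n:
            best, best_seq = obj, seq
            continue
        done = set(seq)
        for j in range(n):
            if j in done or not preds[j] <= done:
                continue                # j unavailable: a predecessor unsequenced
            f2 = finish + (s[seq[-1]][j] if seq else s0[j]) + p[j]
            o2 = max(obj, f2 + q[j])    # delivered completion of j
            rem = [k for k in range(n) if k not in done and k != j]
            lb2 = o2
            if rem:                     # setups ignored: still a valid bound
                lb2 = max(lb2, f2 + sum(p[k] for k in rem),
                          f2 + max(p[k] + q[k] for k in rem))
            if lb2 < best:
                heapq.heappush(heap, (lb2, next(tick), seq + (j,), f2, o2))
    return best, best_seq
```

On a tiny instance the search matches exhaustive enumeration; a stronger bound such as LB1 below would prune more nodes at a higher cost per node.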
The lower bounds used for fathoming form one of the
most critical components of any branch and bound method. We
will present two lower bounds that have been developed for
1/prec,qj,SDST/Cmax.
Let us first consider viewing the q_ij as a "teardown"
time necessary to bring the machine to a final state after
the completion of the last operation. Let us refer to this
modified problem as (AP1). The makespan of this problem will
be given by

Cmax = Σ_{ij∈N} p_ij + Σ_{i=1}^{n-1} s_{[i],[i+1]} + q_{[n]}
We then have the following propositions:

Proposition 5.1

The optimal makespan for (AP1) is a lower bound on the
makespan for 1/prec,qj,SDST/Cmax.

Proof:

Consider the graph G* corresponding to an optimal
sequence S* for 1/prec,qj,SDST/Cmax. There are two cases to
consider:

i) The operation with maximum lateness in S* is the
last in the sequence. Then the longest path in G* is the
path 0 → [1] → [2] → ... → [n-1] → [n] → *. Note that by its
definition, an optimal solution to (AP1) will be the
shortest path from 0 to * containing all n nodes
corresponding to operations. Hence, the path 0 → [1] → [2] →
... → [n-1] → [n] → * in G* must be the same as that
generated by the solution to (AP1), otherwise it would not
be optimal. Thus the objective function values of
1/prec,qj,SDST/Cmax and (AP1) are equal.

ii) The operation with maximum lateness in S* is not
the last operation. Then, since the objective function value
corresponds to the length of a longest path in G*, the path
0 → [1] → [2] → ... → [n-1] → [n] → * cannot be a longest
path in G*. Since the optimal value of (AP1) corresponds to
the length of the shortest path of this form, it must be
less than the length of the longest path in G*, and hence
the optimal value of 1/prec,qj,SDST/Cmax. Q.E.D.
Proposition 5.2

If the operation having maximum lateness in the
sequence obtained from (AP1) is the last operation in the
sequence, then the sequence is optimal for
1/prec,qj,SDST/Cmax.

Proof:

Construct the graph G corresponding to the sequence
obtained by solving (AP1), numbering nodes according to
their position in the sequence. Since operation [n] has
maximum lateness, the longest path in G is the path 0 → [1]
→ [2] → ... → [n] → *, and the length of this path,

Σ_{ij∈N} p_ij + Σ_{i=1}^{n-1} s_{[i],[i+1]} + q_{[n]},

is equal to the optimal value of (AP1). Since we know from
Proposition 5.1 that the optimal value of (AP1) is a lower
bound on the optimal value of 1/prec,qj,SDST/Cmax, this
sequence is optimal for 1/prec,qj,SDST/Cmax. Q.E.D.
Problem (AP1) can be formulated as a Travelling
Salesman Problem (TSP) as follows. Let the cities correspond
to the node set of G. Let the arc costs represent the setup
times s_{ij,kl} for arcs between nodes corresponding to
operations, and q_ij for arcs incident into node *. There are
no arcs incident into node 0 except one from node * that has
cost 0, which is also the only arc incident out of node *.
Thus we have ensured that the tour starts and ends in city 0,
with city * the next to last city in the tour. The problem is
to find the minimum cost tour starting and ending at node 0
that visits all intermediate nodes exactly once.
Since the TSP is known to be NP-hard, it is not
computationally feasible to solve it to develop bounds at
each node of an implicit enumeration tree. Therefore it
becomes necessary to find a tight lower bound on the optimal
value of (AP1) which we can obtain with less computational
effort. Such a lower bound is provided by the assignment
relaxation of the TSP. This problem is solvable in
polynomial time, and Balas and Toth [12] have found in an
extensive study that this bound is a tight one for the TSP,
on average yielding an optimal value equal to 99.2% of the
optimal TSP value. It is important to note that the solution
generated by the assignment problem need not be feasible for
(AP1), since it may contain subtours and violate precedence
constraints.

Since the optimal value of the assignment problem is a
lower bound on that of the TSP, substituting the optimal
value of the assignment problem for that of the TSP will
still yield a lower bound on 1/prec,qj,SDST/Cmax. Thus, if
we denote the optimal value of the assignment relaxation of
the TSP described above by A, then we have a lower bound LB1
given by

LB1 = Σ_{ij∈N} p_ij + A
The lower bound LB1(P) at a node of the enumeration
tree corresponding to a partial sequence P is given by

LB1(P) = M(P) + T + A(N\P)

where M(P) denotes the makespan of the jobs in the partial
sequence, T the total processing time of the jobs in N\P,
and A(N\P) the optimal value of the assignment problem
solved over the unsequenced jobs.

A second lower bound, which will be referred to as LB2,
is obtained by relaxing the sequence-dependent setup times
and sequencing the operations in Earliest Due Date (EDD)
order. The bound LB2 is set equal to the maximum lateness
obtained from this sequence.
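Both bounds can be sketched as follows (Python; the data encoding and the brute-force assignment solver are illustrative assumptions — in practice the assignment relaxation would be solved with a polynomial method such as the Hungarian algorithm):

```python
from itertools import permutations

def lb1(p, q, s0, s):
    """LB1 = sum of processing times + assignment relaxation of the setup TSP.
    From-nodes: the dummy start 0 and each operation; to-nodes: each operation
    and the sink *. Self-assignments are forbidden; subtours are allowed, which
    is exactly why this is only a lower bound. Brute force: tiny instances only."""
    n = len(p)
    STAR = n                                   # index n stands for the sink *
    def cost(frm, to):                         # frm == -1 encodes the start node 0
        if frm == to:
            return float("inf")                # no self-loops in the relaxation
        if to == STAR:
            return 0 if frm == -1 else q[frm]  # teardown (delivery) time into *
        return s0[to] if frm == -1 else s[frm][to]
    froms = [-1] + list(range(n))
    tos = list(range(n)) + [STAR]
    A = min(sum(cost(f, t) for f, t in zip(froms, perm))
            for perm in permutations(tos))
    return sum(p) + A

def lb2(p, d):
    """LB2: relax the setup times, sequence in EDD order, return the Lmax."""
    t, worst = 0, float("-inf")
    for j in sorted(range(len(p)), key=lambda j: d[j]):
        t += p[j]
        worst = max(worst, t - d[j])
    return worst
```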
Dynamic programming algorithms for 1/prec,SDST/Lmax

In this subsection we shall examine dynamic programming
procedures for the 1/prec,SDST/Lmax problem. We assume that
there are m lots of chips to be processed, and that lot i
requires N(i) operations. Operations on the same lot are
numbered in order of their precedence. The total number of
operations to be scheduled is n. Recall that we have a
chain-like precedence graph, since operations on separate
lots are not linked by precedence constraints. This imposes
a fixed ordering on the operations on the same lot. Given
this ordering, we now give a dynamic programming procedure
similar to that of Monma and Potts [78] to merge the
operations on different lots together into an optimal
schedule. Define f[n(1),n(2),...,n(m),t,i] to be the minimum
Lmax value for a partial schedule completed at time t and
containing the first n(k) operations of lot k, k = 1,...,m,
where the last operation in the partial sequence comes from
lot i. Initially, set f[0,0,...,0,0] = 0 and all other
values to infinity. The optimal Lmax value will be the
smallest value of the form

min_{1≤i≤m} { f[N(1),N(2),...,N(m),T,i] }, where

T ≤ Σ_{i=1}^{m} Σ_{j=1}^{N(i)} p_ij + Σ_{i=1}^{m} N(i) s_max

and s_max denotes the maximum setup time value.
The function values can be computed using the following
recursive relation:
f[n(1),n(2),...,n(m),t,i] =
min_{1≤k≤m} { max { t - d_{n(i),i}, f[n'(1),n'(2),...,n'(m),t',k] } }

where n'(j) = n(j) for j ≠ i, n'(i) = n(i) - 1, and

t' = t - p_{n(i),i} - s_{(n'(k),k),(n(i),i)}
The number of possible states in this dynamic program
is m(N+1)^m T, where N = max_i {N(i)}, and the value of each
state is calculated in O(m) steps. Hence the computational
complexity of this procedure is O(m^2 (N+1)^m T).
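A forward-recursion sketch of this dynamic program follows (equivalent to the backward relation above up to the direction of computation; the dictionary-based state store and the keying of setup times by (lot, operation) pairs are assumptions of the sketch):

```python
def dp_lmax(lots, d, s0, s):
    """Forward DP sketch for 1/prec,SDST/Lmax over states (n(1..m), last, t).
    lots[i]: processing times of lot i's operations in precedence order;
    d[i][j]: due date of operation j of lot i; s0[(i, j)]: initial setup;
    s[(a, b)][(i, j)]: setup when operation (i, j) follows operation (a, b)."""
    m = len(lots)
    states = {(tuple([0] * m), None, 0): float("-inf")}  # state -> min Lmax
    for _ in range(sum(len(lot) for lot in lots)):
        nxt = {}
        for (cnt, last, t), lmax in states.items():
            for i in range(m):
                j = cnt[i]
                if j == len(lots[i]):
                    continue                  # lot i already fully scheduled
                setup = s0[(i, j)] if last is None else s[last][(i, j)]
                t2 = t + setup + lots[i][j]
                key = (cnt[:i] + (j + 1,) + cnt[i + 1:], (i, j), t2)
                val = max(lmax, t2 - d[i][j])
                if nxt.get(key, float("inf")) > val:
                    nxt[key] = val            # keep the best value per state
        states = nxt
    return min(states.values())
```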
When setup and process times are large, the large
values of T will result in rapid growth of the state space
and thus of storage requirements. However, we observe that
the completion time t of any partial schedule will consist
of two components, the processing times of the operations in
the partial sequence and the setup times taking place
between operations. This enables us to take advantage of the
special structure of the semiconductor testing environment.
An important characteristic of the production equipment in
use is that there are a limited number, generally less than
ten, of distinct entries in the setup time matrix. This is
much less than n^2, the number of possible entries in the
setup matrix. Let the total number of distinct setup time
values s(k) be S.
Define f[n(1),n(2),...,n(m),σ1,σ2,...,σS,i] to be the
minimum Lmax value for a partial schedule containing the
first n(k) operations of lot k, k = 1,...,m, and σj occurrences
of the j'th distinct setup time value s(j), j = 1,...,S, where
the last operation to be processed belongs to lot i. We can
now calculate the completion time t of the partial sequence
from the relation

t = Σ_{i=1}^{m} Σ_{j=1}^{n(i)} p_ij + Σ_{k=1}^{S} σk s(k)
Initially set f[0,0,...,0,0] = 0 and all other values
to infinity. The optimal value will be the smallest value of
the form

min_{1≤i≤m} { f[N(1),...,N(m),σ1,σ2,...,σS,i] }, where Σ_k σk = n.

The recursive relation can now be written as

f[n(1),n(2),...,n(m),σ1,σ2,...,σS,i] =
min_{1≤k≤m} { max { t - d_{n(i),i}, f[n'(1),...,n'(m),σ'1,...,σ'S,k] } }

where t is as calculated above, σ'j = σj if s_{(n'(k),k),(n(i),i)} ≠
s(j), and σ'j = σj - 1 if s_{(n'(k),k),(n(i),i)} = s(j).
The number of states in this dynamic program is at most
m(N+1)^m n^S, where N = max_i {N(i)}, and the value of each state
is computed in O(m) steps. Hence the complexity of this
procedure is O(m^2 (N+1)^m n^S).
It is interesting to note that the complexity of these
procedures is polynomial in the number of operations but
exponential in the number of lots. Thus, when the number of
lots is fixed, 1/prec,SDST/Lmax can be solved in polynomial
time. When the number of lots is small and the number of
operations on each lot is large, this procedure may provide
a practical alternative to branch and bound. However, as the
number of lots increases, the computational burden increases
rapidly.
Heuristic Procedures for 1/rj,prec,qj,SDST/Cmax

In this subsection we will first examine the worst-case
performance of a certain class of one-pass heuristics, list
scheduling procedures, for 1/rj,prec,qj,SDST/Cmax. For the
sake of simplicity, in this section we shall use only a
single subscript to represent operations, taking the lot
structure into account explicitly as precedence constraints.
We shall then examine the behavior of a member of this class
that has been extensively studied in the context of the
problem without setup times, the Extended Jackson's
Rule [20,85], for the special case of the problem where
release times and due dates are agreeable, i.e., r_i < r_j
implies d_i < d_j.
We can define the family of listscheduling algorithms
as follows:
Algorithm LS:
Whenever the machine is free and there are one or more
available operations, select one of the available operations
and sequence it next.
An operation i is said to be available at time t if r_i
≤ t and all predecessors of operation i have already been
sequenced at time t.

Note that which of the available operations is to be
selected can be specified in different ways. Examples of
selection criteria resulting in different list-scheduling
heuristics might be to select the operation with earliest
due date or shortest processing time.
Due to the presence of release times, the schedule
obtained by Algorithm LS will consist of one or more blocks,
periods of time in which the machine is continually busy,
either in processing or in setup. Let C(LS) denote the
maximum completion time of the sequence obtained by
Algorithm LS. Let [k] denote the k'th operation in the
sequence, and [j] be the operation such that its completion
time is equal to C(LS). Then

C(LS) = r_{[i]} + Σ_{h=i-1}^{j-1} s_{[h],[h+1]} + Σ_{h=i}^{j} p_{[h]} + q_{[j]}

for some operation [i], before whose arrival the machine is
idle.
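Algorithm LS can be sketched as follows; the priority callback determines the member of the class (for instance, ranking by largest q_j gives the Extended Jackson's Rule discussed below). The instance encoding is an assumption of the sketch.

```python
def list_schedule(r, p, q, s0, s, prec, priority):
    """Algorithm LS (sketch): whenever the machine is free, choose among the
    available operations by the given priority key; priority = lambda j: -q[j]
    yields Algorithm EJ. Returns (sequence, C(LS)), the delivered makespan."""
    n = len(p)
    preds = [set() for _ in range(n)]
    for a, b in prec:
        preds[b].add(a)
    t, last, seq, done, cmax = 0, None, [], set(), 0
    while len(seq) < n:
        ready = [j for j in range(n) if j not in done and preds[j] <= done]
        avail = [j for j in ready if r[j] <= t]
        if not avail:                        # machine idles until the next arrival
            t = min(r[j] for j in ready)
            continue
        j = min(avail, key=priority)
        t += (s0[j] if last is None else s[last][j]) + p[j]
        cmax = max(cmax, t + q[j])           # delivered completion time of j
        seq.append(j); done.add(j); last = j
    return seq, cmax
```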
Proposition 5.3: Let C(LS) be the value of the schedule
obtained from LS for the 1/rj,prec,qj,SDST/Cmax problem, and
C* the optimal value of 1/rj,qj/Cmax, the problem without
setup times. Then C(LS) ≤ 3C*, and this bound is tight.
Proof: As discussed above,

C(LS) = r_{[i]} + Σ_{h=i-1}^{j-1} s_{[h],[h+1]} + Σ_{h=i}^{j} p_{[h]} + q_{[j]}

By construction of the sequence, operation [i] is
available no later than operation [j], which means that
either r_{[i]} ≤ r_{[j]}, or r_{[i]} > r_{[j]} and [i] precedes [j]. However,
the latter case is impossible, since if [i] precedes [j] then
they must be operations performed on the same lot, which
means that r_{[i]} = r_{[j]}. This contradicts the assumption that
r_{[i]} > r_{[j]}. Thus, we conclude that r_{[i]} ≤ r_{[j]}. Hence

C(LS) ≤ r_{[j]} + Σ_{h=i-1}^{j-1} s_{[h],[h+1]} + Σ_{h=i}^{j} p_{[h]} + q_{[j]}

= r_{[j]} + p_{[j]} + q_{[j]} + Σ_{h=i-1}^{j-1} s_{[h],[h+1]} + Σ_{h=i}^{j-1} p_{[h]}

The first three terms clearly constitute a lower bound
on C*. Each of the latter two terms is less than or equal to
the sum of the processing times, which in turn is a lower
bound on C*. Thus, C(LS) ≤ 3C*.
We now provide an example to show that this bound is
tight. Consider an instance without precedence constraints
and with the following parameters:

i    r_i    p_i    q_i
1    0      n      0
2    1      1      n

where s_12 = n and s_21 = 1. Let all other setup times be
equal to 0. Algorithm LS will yield the sequence {1,2} with
completion time 3n+1. However, the optimal sequence for the
problem without setup times is {2,1} with completion time
n+2. Thus, C(LS)/C* approaches 3 as n becomes large. Q.E.D.
Remark: Proposition 5.3 is also true for the problem without
precedence constraints.

A particular member of the class of list-scheduling
algorithms is the Extended Jackson's Rule studied by
Potts [85] and Carlier [20]. This algorithm can be stated as
follows:

Algorithm EJ:

Whenever the machine is free and there are one or more
available operations, sequence next the operation with the
largest value of q_j.

Let [k] denote the k'th operation in the sequence, and
[j] be the operation such that its completion time is equal
to C(EJ). Then

C(EJ) = r_{[i]} + Σ_{h=i-1}^{j-1} s_{[h],[h+1]} + Σ_{h=i}^{j} p_{[h]} + q_{[j]}

for some operation [i], before whose arrival the machine is
idle.
It is clear from Proposition 5.3 that for the general
1/rj,prec,qj,SDST/Cmax problem, C(EJ) ≤ 3C*, where C* is the
optimal value of 1/rj,qj/Cmax. However, for a special case
of the problem we have the following result:

Proposition 5.4: Suppose r_s > r_t implies d_s > d_t, and thus
q_s < q_t, and that s_ij ≤ p_j for all jobs i, j. Let C(EJ) be the
value of the sequence obtained by Algorithm EJ, and C* the
optimal value of 1/rj,qj/Cmax. Then C(EJ) ≤ 2C*, and this
bound is tight.
Proof: By construction of the sequence, r_{[i]} = min_k {r_k},
k ∈ {[i],...,[j]}, by the argument in the proof of Proposition
5.3. For any k ∈ {[i],...,[j]}, suppose q_{[j]} > q_{[k]}. It is
impossible for [j] and [k] to be operations on the same lot,
since in that case we would have q_{[j]} = q_{[k]}. Hence [k] and
[j] are not operations on the same lot, i.e., they are not
linked by any precedence constraints. Since [k] is processed
earlier than [j], this means either q_{[k]} ≥ q_{[j]} or r_{[k]} < r_{[j]},
which by the assumption of agreeable arrival times and due
dates implies q_{[k]} ≥ q_{[j]}. Both these cases contradict the
assumption that q_{[j]} > q_{[k]}. Hence, we conclude that q_{[j]} =
min_k {q_k}.

It has been shown by Carlier [20] that for any subset I
of the operations to be sequenced,

H(I) = min_{i∈I} {r_i} + Σ_{i∈I} p_i + min_{i∈I} {q_i}

is a lower bound on the optimal value of 1/rj,qj/Cmax, the
problem without setups. Setting I = {[i],...,[j]}, we see that

C(EJ) = r_{[i]} + Σ_{h=i-1}^{j-1} s_{[h],[h+1]} + Σ_{h=i}^{j} p_{[h]} + q_{[j]}

≤ r_{[i]} + Σ_{h=i}^{j} p_{[h]} + q_{[j]} + Σ_{h=i-1}^{j-1} s_{[h],[h+1]}

≤ r_{[i]} + Σ_{h=i}^{j} p_{[h]} + q_{[j]} + Σ_{h=i}^{j} p_{[h]}

≤ 2C*
To see that the bound is tight, consider the following
example:

i    r_i    p_i    q_i
1    0      1      n
2    1      n      1

where s_12 = n and s_21 = 0. Algorithm EJ returns the
sequence {1,2} with C(EJ) = 2n+2. The optimal solution
without setups is also {1,2}, with C* = n+2. Thus we see
that C(EJ)/C* tends to 2 as n becomes large. Q.E.D.
Corollary 5.1: Let L(EJ) denote the value of Lmax of the
sequence obtained by applying Algorithm EJ to the
corresponding 1/rj,qj,SDST/Cmax problem, and L* the optimal
value of 1/rj/Lmax. Then L(EJ) ≤ 2L* + d_max, where d_max =
max_i {d_i}.

Proof: Note that for a given sequence, the values of Cmax
obtained in 1/rj,qj,SDST/Cmax and Lmax obtained in
1/rj,SDST/Lmax will differ by the constant K = max_i {d_i}.
Similarly, for the problem without setup times, C* and the
optimal Lmax value L* will differ by K. Thus, L(EJ) + K ≤
2L* + 2K. After simplification, the result follows since
K = d_max. Q.E.D.
In order to obtain further improvements over the
sequences generated by the above heuristics, the
neighborhood search procedure described in the next section
can be applied to the solutions obtained.
A Neighborhood Search Algorithm
The algorithm developed in this section is a
neighborhood search procedure that finds a local optimum. It
is based on insights obtained using the disjunctive graph
representation described previously in Chapters III and IV.
These insights enable us to ensure that only feasible
neighboring solutions are generated by the procedure at each
step.
The approach used here makes use of the following
result [53]:

Theorem: Let O_i, O_j denote the locations in the current
sequence of jobs i and j. The reversal of a disjunctive arc
between i and j cannot lead to a cycle if and only if the
two operations are adjacent in the sequence, i.e., either O_i
= O_j + 1 or O_j = O_i + 1.
This tells us that feasibility is maintained by
reversing only disjunctive arcs between adjacent operations
in the sequence. Note that this corresponds to a pairwise
exchange between operations that are adjacent in the
sequence and have no precedence constraints. This renders
explicit consideration of the disjunctive graph
representation of the workcenter unnecessary.
The algorithm presented here uses this property as a
basis for a neighborhood search procedure. An initial
feasible solution is generated using a heuristic. The set of
pairwise exchanges between adjacent operations in this
sequence is examined, and those leading to an improvement of
Lmax are used to generate neighboring sequences. This
procedure is then applied in turn to all neighboring
sequences until no further exchanges leading to improvement
can be found. The neighboring sequences generated at each
stage are checked to ensure they have not been examined
already in order to prevent cycling. The final solution
returned by the algorithm is the best sequence encountered
during the search.
Since we have a single machine, and assuming n
operations, we have at most (n-1) exchanges to consider at
any stage of the search.
In order to determine whether an exchange will lead to
a poorer solution, consider two adjacent operations im and jn
in the current sequence. Let kp be the operation preceding
im in the current sequence, and lq the operation succeeding
jn. The Gantt chart is as follows:

Current schedule:            A | kp | im | jn | lq | B

After exchange of im and jn: A | kp | jn | im | lq | B

Let C_{im,jn} denote the earliest start time of operation lq
before the exchange, and C_{jn,im} the earliest start time after
the exchange, where t_A denotes the completion time of the
portion A of the schedule preceding kp. Then

C_{im,jn} = t_A + p_kp + s_{kp,im} + p_im + s_{im,jn} + p_jn + s_{jn,lq}

C_{jn,im} = t_A + p_kp + s_{kp,jn} + p_jn + s_{jn,im} + p_im + s_{im,lq}

D = C_{im,jn} - C_{jn,im} = (s_{kp,im} + s_{im,jn} + s_{jn,lq})
- (s_{kp,jn} + s_{jn,im} + s_{im,lq})

Clearly, when D > 0, carrying out the exchange cannot
make Lmax worse, unless Lmax = L_im.

Thus, an exchange cannot worsen the current solution

1) if D > 0, and

2) if the delay in the completion of im is less than
Lmax - L_im.
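The quantity D depends only on the six setup terms, since the processing times cancel; a sketch (the matrix encoding of the setup times is an assumption):

```python
def exchange_delta(s, kp, im, jn, lq):
    """D = C_{im,jn} - C_{jn,im}: the amount by which swapping the adjacent
    operations im and jn moves the start of the succeeding operation lq
    earlier. Only the three setup arcs replaced by the swap matter; every
    processing-time term cancels in the difference."""
    before = s[kp][im] + s[im][jn] + s[jn][lq]
    after = s[kp][jn] + s[jn][im] + s[im][lq]
    return before - after
```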
We can state the algorithm as follows:

Algorithm NBHD:

Step 1: Generate an initial feasible solution by
sequencing all lots in ascending due date order, with the
operations on each lot in order of precedence. Set UB to the
value of Lmax for this sequence. Record this sequence in
BEST.

Step 2: Examine each of the (n-1) possible pairwise
exchanges in the current sequence. If an exchange meets any
of the following conditions, ignore it:

- It is infeasible due to precedence constraints
- It leads to a poorer solution
- It leads to a sequence that has already been
generated

If there are no exchanges leading to improvement, select
another sequence whose neighbors have not been examined and
repeat Step 2. If no such sequence exists, stop and return
the sequence in BEST as the final solution, with objective
function value UB.

Step 3: Generate the sequences corresponding to the
exchanges determined in Step 2 and record them. If any
sequence generated has a better objective function value
than UB, record that sequence in BEST and update UB. Select
a sequence whose neighbors have not been examined and go to
Step 2.
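Algorithm NBHD can be sketched as follows. For clarity this version re-evaluates the Lmax of each neighbor directly rather than through the D test; `same_lot` is an assumed callback flagging swaps forbidden by precedence, and the instance encoding is likewise an assumption.

```python
def nbhd(seq0, p, d, s0, s, same_lot):
    """Adjacent pairwise-interchange search for Lmax (Algorithm NBHD sketch).
    Only improving, precedence-feasible exchanges are kept, and a set of
    visited sequences prevents cycling; returns the best sequence found."""
    def lmax(seq):
        t, worst, last = 0, float("-inf"), None
        for j in seq:
            t += (s0[j] if last is None else s[last][j]) + p[j]
            worst = max(worst, t - d[j])
            last = j
        return worst

    best, best_val = tuple(seq0), lmax(seq0)
    seen, frontier = {tuple(seq0)}, [tuple(seq0)]
    while frontier:
        cur = frontier.pop()
        val = lmax(cur)
        for k in range(len(cur) - 1):
            i, j = cur[k], cur[k + 1]
            if same_lot(i, j):
                continue                 # swap would violate lot precedence
            cand = cur[:k] + (j, i) + cur[k + 2:]
            if cand in seen:
                continue                 # already generated: prevents cycling
            cval = lmax(cand)
            if cval < val:               # keep only improving exchanges
                seen.add(cand)
                frontier.append(cand)
                if cval < best_val:
                    best, best_val = cand, cval
    return list(best), best_val
```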
Another way of looking at what this algorithm attempts
to do is to view it as trying to reduce the maximum lateness
by reducing the completion time of the latest lot. This is
equivalent to solving a travelling salesman problem
considering only the operations in the sequence up to the
latest operation. Viewed from this perspective, the
algorithm can then be described as a 3-exchange procedure
where we examine combinations of three arcs (with costs
s_{kp,im}, s_{im,jn} and s_{jn,lq}) and try to replace them with a
shorter three-arc combination (with costs s_{kp,jn}, s_{jn,im}
and s_{im,lq}).
Minimizing Number of Tardy Lots

In the context of semiconductor testing, it is the
lateness of the entire lot, not of any individual operation,
that is of interest. Thus for the rest of this section we
will use the notation ΣUi to denote the number of tardy
lots, as opposed to individual operations. The lot structure
is indicated by the presence of precedence constraints
between operations on the same lot. Thus, 1/prec,SDST/ΣUi
will denote the problem of minimizing the number of tardy
lots in the presence of precedence constraints between
operations on the same lot and sequence-dependent setup
times. 1/SDST/ΣUi will be used to represent the special case
where there are no precedence constraints, i.e., each lot
requires only one operation.

Recall from Chapter III that even the 1/prec/ΣUi
problem is NP-hard [72]. Hence we shall concentrate on
developing a heuristic procedure for 1/prec,SDST/ΣUi and
examining its worst-case behavior for a special case.
A Heuristic Procedure for 1/prec,SDST/ΣUi

In this section we shall present an algorithm similar
to that of Moore, cited in Baker [3], for the 1/prec,SDST/ΣUi
problem. This algorithm is based on certain relations that
exist between the problem of minimizing maximum lateness
and that of minimizing the number of tardy jobs. We shall
use the term "Lmax-optimal" to denote a sequence which
minimizes Lmax, and "ΣUi-optimal" to denote a sequence which
minimizes the number of tardy lots.

Moore's algorithm [3] is based on the fact that any
optimal solution will consist of a set A of jobs that are on
time, whose sequence minimizes Lmax over that set, and
another set T of tardy jobs sequenced after A, whose sequence
is immaterial. Once the algorithm has determined that there
does not exist a sequence with ΣUi = 0, it proceeds to
construct the set T of tardy jobs by assigning to T the job
in the sequence up to the first tardy job that has the
longest processing time. Moore proves that there is no way
the jobs in the sequence up to the first tardy job can be
sequenced so that none are tardy. The idea behind the
algorithm is that, given that one job has to be late, we
assign to T the job that will allow those remaining to start
as early as possible.
For the rest of this section, we will assume that for
all operations i, j, k the setup times satisfy the triangle
inequality, i.e., s_ik ≤ s_ij + s_jk. This ensures that removing
an operation from any sequence does not result in later
completion times for the remaining operations.
Proposition 5.5: If setup times satisfy the triangle
inequality, then an optimal sequence S will have the form
(A,T), where the lots in A are on time and in an Lmax-optimal
sequence and those in T are tardy.

Proof: Let T be the set of tardy lots. Resequence S so that
the sequence of operations for lots in A is maintained
relative to one another, and the operations on lots in T are
at the end of the sequence in any feasible order. In order
to do this it may be necessary to move an operation j ∈ T out
from between two operations i, k ∈ A. However, since setup
times satisfy the triangle inequality, removing an operation
j from the sequence cannot result in increased completion
times for the remaining operations. Hence the operations in
A start no later than they did before resequencing, so no
more jobs are tardy now than were in S.

Since no lot in A is tardy, Lmax ≤ 0 for this
sequence. Hence resequencing the operations in A so as to
minimize Lmax cannot increase Lmax. Thus, the number of
tardy lots in the rearranged sequence is no greater than
that in S, and so we have obtained an optimal schedule of
the desired form. Q.E.D.
Given this result, if we had a means of deciding which
lot to assign to T given an Lmax-optimal sequence, we could
construct a ΣUi-optimal sequence in the same fashion as
Moore. However, this is not possible due to the presence of
sequence-dependent setup times. In the presence of setups,
it is no longer the case, for example, that there does not
exist a sequence of the operations up to the first tardy
operation in an Lmax-optimal sequence in which none are
late. One can, however, prove the following:
Proposition 5.6: If in the sequence up to the first tardy
operation in an Lmax-optimal sequence with Lmax > 0 there
exist three consecutive operations i, j, k such that s_ij + p_j
+ s_jk - s_ik ≥ Lmax, then there exists a ΣUi-optimal sequence
in which only the lot containing operation j is tardy.

Proof: Since in the original sequence Lmax > 0, at least
one operation must be tardy in the ΣUi-optimal sequence.
Suppose we delete the lot containing operation j from the
sequence and maintain the relative sequence of all other
operations. Then the completion times of all operations to
the right of j in the sequence will decrease by at least s_ij
+ p_j + s_jk - s_ik units of time, which is greater than or equal
to Lmax. Since there are no tardy operations to the left of
j, this will result in Lmax ≤ 0 for the new sequence. If we
append the lot containing operation j to the end of the new
sequence, we have a sequence with ΣUi = 1. Since at least one
lot has to be tardy, this sequence is ΣUi-optimal. Q.E.D.
Noting that we are concerned with minimizing the number
of tardy lots and not the number of tardy operations,
instead of assigning operations to T we will assign entire
lots. Hence the time saved by deleting a lot m from the
sequence is the sum of the savings obtained by dropping all
operations in that lot.
Based on these insights, we can construct a heuristic
closely paralleling Moore's Algorithm in operation. The
algorithm can be stated as follows:
Algorithm NTH:

Step 1: Obtain a solution to the Lmax problem. If this
Lmax value is nonpositive, go to Step 3.

Step 2: Examine the subsequence up to the first tardy
operation. Delete the lot that results in the largest time
savings. Go to Step 1.

Step 3: Construct the final sequence by taking the
current sequence given by Step 1 and appending to the end
all deleted lots, with their operations in any feasible
order.
In order to implement this algorithm, it is necessary
to have available an algorithm to solve the problem of
minimizing Lmax. In theory, the exact algorithms presented
earlier in this chapter could be used for this purpose.
However, use of a heuristic together with the neighborhood
search procedure presented in the previous section might
also yield good solutions with much less computational
burden.
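A sketch of Algorithm NTH for the one-operation-per-lot case analyzed next. Step 1 is solved here by exact enumeration, viable only for tiny instances; as noted above, a heuristic plus the neighborhood search would be used in practice. All names and the instance encoding are assumptions.

```python
from itertools import permutations

def nth(p, d, s0, s):
    """Algorithm NTH sketch for 1/SDST/ΣUi (one operation per lot).
    Each round: solve the Lmax subproblem over the remaining jobs, then drop
    the job in the head subsequence (up to the first tardy job) whose removal
    saves the most time. Returns (final sequence, number of tardy lots)."""
    def schedule(seq):
        t, last, comp = 0, None, {}
        for j in seq:
            t += (s0[j] if last is None else s[last][j]) + p[j]
            comp[j] = t
            last = j
        return comp

    remaining, dropped = set(range(len(p))), []
    while True:
        # Step 1: an Lmax-optimal sequence of the remaining jobs (enumeration)
        seq = min(permutations(sorted(remaining)),
                  key=lambda sq: max((schedule(sq)[j] - d[j] for j in sq),
                                     default=0))
        comp = schedule(seq)
        tardy = [j for j in seq if comp[j] > d[j]]
        if not tardy:
            break
        # Step 2: drop the head job whose removal shortens the schedule most
        head = seq[:seq.index(tardy[0]) + 1]
        def saving(j):
            rest = [k for k in seq if k != j]
            return comp[seq[-1]] - (schedule(rest)[rest[-1]] if rest else 0)
        drop = max(head, key=saving)
        remaining.remove(drop)
        dropped.append(drop)
    # Step 3: append all dropped lots at the end, in any order
    return list(seq) + dropped, len(dropped)
```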
Worst-Case Analysis for 1/SDST/ΣUi

We will now examine the worst-case behavior of the
algorithm for the case without precedence constraints, i.e.,
where each lot requires only one operation. Thus the terms
"lot" and "operation" are equivalent in the context of this
problem. We shall assume that the Lmax problem in Step 1 of
Algorithm NTH is solved optimally, yielding an Lmax value of
L at the first iteration, and that s_ij ≤ p_j for all i and j.
The latter corresponds to assuming that it takes no longer
to set up the test equipment for an operation than it does
to perform the operation, which is indeed the case in
practice. We have the following lemmas:
Lemma 5.1: Let L(S) be the optimal Lmax value for a set S
and L(S') be the optimal Lmax value for the set S' obtained
by adding one lot, say lot k, to S. Then L(S') ≤ max
{0, L(S)} + 3p_max, where p_max is the maximum processing time of
all lots in S'.

Proof: Let R be the sequence corresponding to L(S). Then the
sequence obtained by assigning lot k to the first position,
followed by the other lots in the same sequence as in R, is a
feasible sequence for S'. For all lots in R the lateness
will increase by at most 3p_max, since s_ij + p_j + s_jk - s_ik ≤
2p_j + p_k ≤ 3p_max for any i, j, k. Since the latest possible
completion time of lot k in the new sequence is 2p_max, L_k ≤
3p_max. The result follows. Q.E.D.
Corollary 5.2: Let L be the optimal Lmax value obtained at
the first iteration. Then L/3p_max is a lower bound on the
optimal number of tardy lots ΣUi*.

Proof: If L ≤ 0 then there are no tardy lots and we are
done. For the other case, ΣUi* > 0. Suppose ΣUi* is less than
L/3p_max. Let ΣUi* = L/3p_max - k, where k > 0. By Proposition
5.5, the optimal sequence corresponding to ΣUi* contains two
sets A and T of early and tardy lots respectively. Repeatedly
applying Lemma 5.1 by placing the lots in T in front of the
lots in A, we obtain a sequence S having maximum lateness
less than or equal to 3p_max(L/3p_max - k) < L, which
contradicts the optimality of L. Q.E.D.
Lemma 5.2: Let L be the optimal Lmax value obtained at the
first iteration. Then ⌈L/p_min⌉ is an upper bound on the
number of tardy lots yielded by Algorithm NTH if setup times
satisfy the triangle inequality.

Proof: Since setup times satisfy the triangle inequality,
when a lot in the subsequence up to the first tardy lot is
dropped, at least p_min units of time will be saved. Thus
⌈L/p_min⌉ gives the maximum number of lots that would need to
be dropped to have Lmax ≤ 0. Q.E.D.
Proposition 5.7: Let ΣUi be the number of tardy lots given
by the algorithm above where the Lmax problem in Step 1 is
solved optimally, and ΣUi* the optimal number of tardy lots.
Then we have

ΣUi ≤ ⌈3β ΣUi*⌉,

where β = p_max/p_min.

Proof: Noting that by the lemmas above, L/3p_max ≤ ΣUi* ≤ ΣUi
≤ ⌈L/p_min⌉, we see that

ΣUi ≤ ⌈L/p_min⌉ = ⌈3β L/3p_max⌉ ≤ ⌈3β ΣUi*⌉. Q.E.D.
Dynamic programming procedures for 1/prec,SDST/ΣUi

In this section we examine a dynamic programming
procedure for 1/prec,SDST/ΣUi which again takes advantage of
the special chain-like structure of the precedence
constraints. Again assume that operations are indexed
according to their order in the lot. Again there are m lots,
each lot i requiring N(i) operations; the total number of
operations to be scheduled is n and N = max_i {N(i)}. Since
we are concerned with the number of tardy lots and not the
number of tardy operations, we shall assign a weight w_ij = 0
to all operations except those which are the last operation
to be performed on a lot, which will be assigned a weight
equal to 1. Let f[n(1),...,n(m),t,i] be the minimum weighted
number of tardy operations in a partial sequence containing
the first n(k) operations of lot k, where the last operation
in the partial sequence belongs to lot i and is completed at
time t. Initially, set f[0,...,0,0] = 0 and all other values
to infinity. The optimal value will be the smallest value of
the form

min_{1≤i≤m} { f[N(1),...,N(m),T,i] }, where

T ≤ Σ_{i=1}^{m} Σ_{j=1}^{N(i)} p_ij + Σ_{i=1}^{m} N(i) s_max
The recursive relation can then be defined as follows:

f[n(1),...,n(m),t,i] =
  min_{1≤k≤m} { f[n'(1),...,n'(m),t',k] },               if t ≤ d_i,
  min_{1≤k≤m} { f[n'(1),...,n'(m),t',k] + w_{n(i),i} },  if t > d_i,

where n'(j) = n(j) for j ≠ i, n'(i) = n(i) - 1 and
t' = t - p_{n(i),i} - s_{(n'(k),k),(n(i),i)}.

The number of possible states in this dynamic program
is m(N+1)^m T, and each state is evaluated in O(m) steps.
Thus the computational complexity of this dynamic program is
O(m²(N+1)^m T).
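The recursion above can be sketched as a forward enumeration of the reachable states (n(1),...,n(m), last operation, t). The sketch below is illustrative only; the function name and the `setup(prev, cur)` interface over (lot, operation-index) pairs are assumptions for exposition, not the dissertation's notation.

```python
def min_tardy_lots(ops, due, setup):
    """Minimum number of tardy lots for 1/prec,SDST/sum U_j (chain precedence).

    ops[i]      -- processing times of lot i's operations, in precedence order
    due[i]      -- due date of lot i, checked at its last operation
    setup(a, b) -- setup time between operations a and b, each a (lot, op) pair;
                   a is None before the first operation on the machine
    """
    from math import inf
    m = len(ops)
    # A state is (per-lot operation counts, last operation, completion time);
    # its value is the minimum weighted number of tardy operations so far.
    frontier = {(tuple([0] * m), None, 0): 0}
    for _ in range(sum(len(o) for o in ops)):
        nxt = {}
        for (counts, last, t), tardy in frontier.items():
            for i in range(m):
                k = counts[i]
                if k < len(ops[i]):            # next unscheduled op of lot i
                    cur = (i, k)
                    t2 = t + setup(last, cur) + ops[i][k]
                    # weight 1 only on a lot's final operation, as in the text
                    v2 = tardy + (1 if k == len(ops[i]) - 1 and t2 > due[i] else 0)
                    key = (counts[:i] + (k + 1,) + counts[i + 1:], cur, t2)
                    if v2 < nxt.get(key, inf):
                        nxt[key] = v2
        frontier = nxt
    return min(frontier.values())
```

For small instances this enumerates exactly the states f[n(1),...,n(m),t,i] above; as the text warns, the state count grows rapidly with the processing and setup times.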
Again, as was the case for 1/prec,SDST/Lmax, the
presence of the T term leads to rapid growth in the number
of states, and thus storage requirements, as the process and
setup times get large. As was the case for the
1/prec,SDST/Lmax problem, we can exploit the special
structure of the setup time matrix to alleviate this
problem. Again assume that we have S distinct setup time
values s(k), k=1,...,S. Then we can define
f[n(1),n(2),...,n(m),σ_1,σ_2,...,σ_S,i] to be the minimum
weighted number of tardy operations for a partial schedule
containing the first n(k) operations of lot k, k=1,...,m, and
σ_j occurrences of the j'th distinct setup time value s(j),
j=1,...,S, where the last operation to be processed belongs
to lot i. Initially set f[0,...,0,0,...,0,i] = 0 for all i
and all other values to infinity. The optimal value will be
the smallest value of the form

min_{1≤i≤m} { f[N(1),...,N(m),σ_1,σ_2,...,σ_S,i] }, where Σ_j σ_j = n.

The recursive relation can now be written as

f[n(1),n(2),...,n(m),σ_1,σ_2,...,σ_S,i] =
  min_{1≤k≤m} { f[n'(1),...,n'(m),σ'_1,σ'_2,...,σ'_S,k] },               if t ≤ d_i,
  min_{1≤k≤m} { f[n'(1),...,n'(m),σ'_1,σ'_2,...,σ'_S,k] + w_{n(i),i} },  if t > d_i,

where σ'_j = σ_j if s_{(n'(k),k),(n(i),i)} ≠ s(j) and σ'_j = σ_j - 1 if
s_{(n'(k),k),(n(i),i)} = s(j).

The number of states in this dynamic program is
m(N+1)^m n^S, and each state is evaluated in O(m) steps. Hence
the computational complexity of this procedure is
O(m²(N+1)^m n^S).
Again, as was the case for the 1/prec,SDST/Lmax
problem, the complexity of the procedure is polynomial in
the number of operations and exponential in the number of
lots. It is also interesting to note that the dynamic
programming procedures for both 1/prec,SDST/Lmax and
1/prec,SDST/ΣU_j have the same complexity. This is unusual
since ΣU_j is generally a much more difficult performance
measure to minimize than Lmax.
Summary
In this chapter we have presented formulations,
complexity classifications and solution procedures for a
family of problems based on the modelling of workcenters
consisting of a single tester. These problems turn out to be
NP-hard, but also in a number of instances to have
interesting theoretical properties.
For the problem of minimizing maximum lateness when all
jobs are available simultaneously, we have presented a
branch and bound algorithm and dynamic programming
procedures. The branch and bound algorithm has been shown to
be capable of solving problems with up to 15 operations in
reasonable computer time. The dynamic programming procedures
are polynomial in the number of operations, but exponential
in the number of lots. Thus when the number of lots is
small, these procedures may provide a good alternative to
the branch and bound algorithm.
Heuristics have been developed for the problems of
minimizing Lmax with nonsimultaneous job arrivals and
minimizing number of tardy lots when all jobs are available
simultaneously. For the former problem, tight absolute error
bounds have been developed and a neighborhood search
procedure outlined to further improve the solutions obtained
from the heuristics. For the latter problem, a data-dependent
worst-case bound has been developed for a special case.
Dynamic programming procedures have also been developed
for the problem of minimizing number of tardy lots. These
procedures are similar to those developed to minimize Lmax
and have the same complexity, which is unusual since number
of tardy jobs is generally a more difficult performance
measure to optimize.
CHAPTER VI
BATCH PROCESSING MACHINES
Introduction
In this chapter we examine the problem of scheduling
batch processing machines. A batch processing machine is
defined to be a machine where a number of jobs can be
processed simultaneously as a batch. The processing time of
a batch is equal to the longest processing time among all
jobs in the batch. Once processing is begun on a batch, no
job can be removed from the machine until the processing of
the batch is complete. By contrast, we shall refer to a
machine capable of processing only one job at a time as a
unit-capacity machine.
These problems are motivated by burn-in operations in
the semiconductor industry, where lots of chips are placed
in ovens and subjected to thermal stresses for an extended
period of time in order to bring out latent defects leading
to infant mortality before the product goes to the customer.
In order to be processed on the batch processor,
integrated circuits are loaded onto boards, which are then
loaded into the ovens. A certain type of integrated circuit
may require a product specific board, or a particular oven.
Each board can hold a certain number of chips, and each oven
a certain number of boards. Thus the problem of scheduling
this kind of operation is complicated by the fact that oven
and board availabilities have to be taken into account to
produce a feasible schedule. The processing times in burn-in
operations are generally extremely long compared to other
testing operations (e.g., 150 hours as opposed to 4-5 hours
in testing). Thus the efficient scheduling of these
operations is of great concern to management. Management
objectives are primarily concerned with customer service and
resource utilization. Hence we examine the performance
measures of maximum tardiness (Tmax), number of tardy jobs
(ΣU_j), total flow time (ΣF_j) and makespan (Cmax).
In this chapter we present efficient algorithms for
minimizing a number of different performance measures on a
single batch processing machine. We also provide an
algorithm to minimize total flowtime on parallel identical
batch processing machines, and extend the Longest Processing
Time (LPT) heuristic developed to minimize makespan on
parallel identical processors to the case of parallel batch
processing machines.
Assumptions and Notation
We shall assume in this chapter that all jobs are of
the same size, i.e., require the same oven capacity, and that
the oven can process at most B jobs at the same time. In
practice this is achieved by splitting large lots into lots
compatible with available capacity before processing on the
batch processor. Once processing of a batch is initiated, it
cannot be interrupted and other jobs cannot be introduced
until processing is completed. With each job i we shall
associate a processing time p_i, a due date d_i and a release
time r_i which corresponds to the time the job becomes
available for processing on the batch machine.
In order to be able to refer to the problems under
study in a concise manner, we shall again use the notation
of Lageweg et al. [59,60], extended to include batch
processing machines. The use of the letter B in the second
field of the notation will represent the fact that the
machine or machines being scheduled are batch processing
machines. Some examples of the extended notation are as
follows:
1/B/Cmax: minimize makespan on a single batch
processing machine.
1/r_j,p_j=p,B/ΣU_j: minimize number of tardy jobs on a
single batch processing machine where all jobs have equal
processing times and job i is available at time r_i.
We will also need the following definitions:
Definition 6.1: We say a sequence is in batch-EDD order if
for all batches P,Q in the sequence where batch P is
processed before batch Q, there is no pair of jobs i,j such
that i∈P, j∈Q and d_i > d_j.
Definition 6.2: We say a sequence is in batch-LPT order
(batch-SPT order) if for all batches P,Q in the sequence
where batch P is processed before batch Q, there is no pair
of jobs i,j such that i∈P, j∈Q and p_i < p_j (p_i > p_j).
These definitions will enable us to define properties
of optimal solutions that can be used to derive optimal
algorithms for a number of batch scheduling problems.
Minimizing Total Flowtime
The first problem we examine is that of minimizing
total flowtime on a single batch processing machine when all
jobs are available simultaneously. We shall denote this
problem by 1/B/ΣF_j. This problem cannot be viewed as a
special case of the serial system consisting of a
unit-capacity machine and a batch processing machine examined by
Ahmadi et al. [2] due to the fact that the processing time of
a batch is not independent of the jobs composing it. In this
section we shall reindex the jobs in ascending order of
processing times. We have the following result:
Lemma 6.1: In the 1/B/ΣF_j problem, there exists an optimal
solution where jobs are in batch-SPT order.
Proof: Consider an optimal sequence and suppose we have a
batch P processed immediately before Q such that
max{p_k | k∈P} > max{p_k | k∈Q}. Then reversing the sequence of the
two batches will improve ΣF_j. Suppose after performing this
exchange for all batches in the sequence we have two jobs
i,j such that p_i < p_j and j∈R, i∈S, where batch R is
processed before batch S. Suppose we move i to R and j to S.
Since p_i < p_j, the completion time of R is not increased.
Since we have rearranged the batches in ascending order of
processing time, we know that max{p_k | k∈S} ≥ max{p_k | k∈R}, so
it is impossible for the completion time of batch S to be
delayed by replacing a job in S with a job from R. Repeating
this procedure we will obtain a solution of the desired
form. Q.E.D.
Based on this property we can develop a dynamic
programming procedure to solve 1/B/ΣF_j. As a result of Lemma
6.1, we can view this as a consecutive partition problem.
Recall also that we have reindexed the jobs in order of
increasing processing time. Hence the processing time of a
batch will be equal to the processing time of the highest
indexed job it contains. Due to the fact that once
processing on a batch starts, no job can be removed until
the batch is completely processed, the completion time of
all jobs in a batch is equal to the completion time of the
batch. Given these insights, we can state the algorithm:
Algorithm DP1:
Let f(j) denote the minimum total flow time for a
schedule containing jobs 1,...,j. Let F(j) denote the
finishing time of the sequence corresponding to f(j).
Initially, f(0)=0, F(0)=0 and f(i)=∞, F(i)=∞ for i<0. Then

f(j) = min_{1≤k≤B} { f(j-k) + k(F(j-k) + p_j) }
F(j) = F(j-q) + p_j

where q is the value of k for which f(j) = f(j-q) + q(F(j-q) + p_j).
The optimal value will be given by f(n).
The number of states in this dynamic program is n, and
each state is evaluated in O(B) steps. Hence the complexity
of this procedure is O(nB).
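Algorithm DP1 can be transcribed directly; the sketch below uses a hypothetical function name and 0-indexed Python lists, with jobs sorted first to enforce the batch-SPT order of Lemma 6.1.

```python
def dp1_total_flowtime(p, B):
    """Algorithm DP1: minimum total flowtime on one batch machine (1/B/sum F_j)."""
    from math import inf
    p = sorted(p)                      # batch-SPT: jobs in ascending processing time
    n = len(p)
    f = [0.0] + [inf] * n              # f[j]: min total flowtime for jobs 1..j
    F = [0.0] * (n + 1)                # F[j]: finishing time of that schedule
    for j in range(1, n + 1):
        best, best_k = inf, 1
        for k in range(1, min(B, j) + 1):   # last batch holds jobs j-k+1..j;
            # its processing time is p[j-1], the largest job it contains
            val = f[j - k] + k * (F[j - k] + p[j - 1])
            if val < best:
                best, best_k = val, k
        f[j] = best
        F[j] = F[j - best_k] + p[j - 1]
    return f[n]
```

The inner loop tries each possible size k for the final batch, exactly as in the recursion above.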
Minimizing Maximum Tardiness
In this section we present efficient algorithms to
minimize Tmax on a batch processing machine. We note that
when B=1, the problem is equivalent to the unit-capacity
machine problem, 1/r_j/Tmax, which is NP-hard in the strong
sense [59,60]. Hence the general 1/r_j,B/Tmax problem is
NP-hard. We will first consider the case where all processing
times are equal, which we will denote by 1/r_j,p_j=p,B/Tmax.
In addition we assume that release times and due dates are
agreeable, i.e., r_i ≤ r_j implies d_i ≤ d_j. The problem of
determining whether a feasible schedule, i.e., a schedule
with Tmax=0 exists for this problem was addressed by Ikura
and Gimple[56]. We present an alternative dynamic
programming algorithm to determine whether or not a feasible
schedule exists and then use it to develop a polynomial time
procedure for minimizing Tmax. We apply a similar approach
to the problem where all jobs are available simultaneously
and processing times and due dates are agreeable.
The approach we use is similar to that of Simons [37]
for the 1/r_j,p_j=p/Lmax problem. We develop an O(nB) dynamic
programming algorithm that will find the feasible sequence
with minimum completion time if one exists. We then use a
bisection search over possible values of Tmax to obtain the
optimal Tmax.
Throughout this section we assume that jobs are indexed
in increasing order of due dates. We have the following
result:
Lemma 6.2: In the 1/r_j,p_j=p,B/Tmax problem with agreeable
due dates and release times, if a solution with Tmax=0
exists, then there exists a solution with Tmax=0 where the
jobs are in batch-EDD order.
Proof: Suppose there exists a feasible solution such that we
have two batches P and Q where P is processed before Q and
there are two jobs i and j such that j∈P, i∈Q and d_i < d_j.
Let r(P) = max{r_k | k∈P} and r(Q) = max{r_k | k∈Q}. Then the
completion times of these batches are C(P) = max{r(P),C(P-1)} + p
and C(Q) = max{r(Q),C(Q-1)} + p respectively, where C(P-1)
and C(Q-1) denote the completion times of the batches
preceding P and Q. Suppose we exchange i and j by moving j
to Q and i to P. Since d_i < d_j and release times and due
dates are agreeable, r_i ≤ r_j ≤ r(P); moreover the batch
preceding Q completes no earlier than C(P) ≥ r(P) ≥ r_j, so
the start of processing on neither batch is delayed as a
result of the exchange. Since all jobs have the same
processing times, the completion times of the batches after
the exchange, C'(P) and C'(Q), will not be greater than C(P)
and C(Q) respectively. Since the original schedule was
feasible, C(Q) ≤ d_i and C(P) ≤ d_j, which implies
C'(Q) ≤ C(Q) ≤ d_i < d_j. Since P is processed before Q,
C'(P) ≤ C(P) ≤ C(Q) ≤ d_i. Hence the new schedule is also
feasible. Q.E.D.
We can now state the dynamic programming algorithm DP2,
which will find a feasible schedule with minimum makespan if
a feasible schedule exists, as follows.
Algorithm DP2:
Let f(j) denote the minimum finishing time for jobs 1,...,j.
Then

f(0) = 0, f(j) = ∞ for j < 0,
f(j) = min{ f_i(j) | j-B+1 ≤ i ≤ j },

where

f_i(j) = max{f(i-1), r_j} + p, if max{f(i-1), r_j} + p ≤ d_i,
       = ∞, otherwise,

denotes the completion time of jobs 1 through j when jobs
i,i+1,...,j are processed in the last batch. If f(j) > d_j,
then it is impossible to schedule jobs 1,...,j so that none
are tardy, so we set f(j) = ∞.
In order to justify Algorithm DP2, note that each batch
will contain no more than B consecutively indexed jobs due
to the property proven in Lemma 6.2 above. Thus the problem
becomes a consecutive partition problem. In order to be
feasible, we must have max{f(i-1), r_j} + p ≤ d_k for all i ≤ k ≤ j
in order for the due dates to be met, and since jobs are
indexed in ascending order of due dates, this is implied by
max{f(i-1), r_j} + p ≤ d_i. We also require j-B+1 ≤ i ≤ j since
the maximum number of jobs that can be processed in a batch
is B ≤ n.
There are n states in this dynamic program, and each
state is evaluated in O(B) operations. Hence the time
complexity of Algorithm DP2 is O(nB).
Given that we have the above algorithm for determining
whether or not a feasible solution exists, we can use the
following approach to minimize Tmax:
Algorithm TMAX1:
Step 1: Apply Algorithm DP2 to the problem. If a feasible
solution is found, stop; Tmax = 0. If an infeasibility is
encountered, i.e., f(j) = ∞ for some j, then construct any
sequence. Set UB to be the value of Tmax for this sequence,
and LB = 0.
Step 2: If UB - LB < ε, stop. Otherwise, augment all the due
dates by the quantity (UB+LB)/2 and apply Algorithm DP2. If
a feasible solution is found, set UB = (UB+LB)/2 and repeat
Step 2. If an infeasibility is encountered, set LB = (UB+LB)/2
and repeat Step 2.
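Algorithm DP2 and the bisection of Algorithm TMAX1 can be sketched as follows. The sketch assumes integer release times, due dates and processing time, so the bisection of Step 2 terminates exactly and the ε test of the text is replaced by an integer interval check; the function names are illustrative.

```python
def dp2_min_finish(r, d, p, B, shift=0):
    """Algorithm DP2: minimum finishing time of a schedule in which no job is
    more than `shift` late, or None if none exists. Jobs must be indexed by
    ascending due date, with agreeable release times and due dates."""
    from math import inf
    n = len(r)
    f = [0] + [inf] * n
    for j in range(1, n + 1):
        for i in range(max(1, j - B + 1), j + 1):   # last batch = jobs i..j
            if f[i - 1] < inf:
                c = max(f[i - 1], r[j - 1]) + p
                if c <= d[i - 1] + shift:           # earliest due date in batch
                    f[j] = min(f[j], c)
    return f[n] if f[n] < inf else None

def tmax1(r, d, p, B):
    """Bisection over a common due-date shift; the upper bound (n-1)p follows
    Lemma 6.3 and assumes r_j + p <= d_j for every job."""
    if dp2_min_finish(r, d, p, B) is not None:
        return 0
    lo, hi = 0, (len(r) - 1) * p
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if dp2_min_finish(r, d, p, B, mid) is not None:
            hi = mid
        else:
            lo = mid
    return hi
```

Each feasibility check is O(nB), so the whole search costs O(nB log((n-1)p)), matching Corollary 6.1.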
The complexity of this algorithm will be O(nB log_2 D),
where D denotes the width of the search interval of the
bisection search. Clearly the lower limit of this interval
is zero, corresponding to a feasible schedule. The upper
limit of the interval can be determined from any sequence,
as we did in Algorithm TMAX1, or from the following result:
Lemma 6.3: In the 1/r_j,p_j=p,B/Tmax problem with agreeable
release times and due dates, (n-1)p is an upper bound on the
value of Tmax.
Proof: Since we have n jobs to be scheduled, we can have no
more than n batches in any schedule. When a given job j
arrives, at most (n-1) jobs can have arrived before it. Thus
it will wait at most (n-1)p time units before being
processed. Since by assumption r_j + p ≤ d_j, if d_j is
augmented by (n-1)p, job j will always be on time with
respect to this new augmented due date. Q.E.D.
Corollary 6.1: The time complexity of Algorithm TMAX1 is
O(nB log_2[(n-1)p]).
Proof: Follows directly from Lemma 6.3. Q.E.D.
We now address the problem of minimizing Tmax on a
batch machine where all jobs are available simultaneously,
process times are arbitrary, but process times and due dates
are agreeable, i.e., p_i ≤ p_j implies d_i ≤ d_j. This assumption
is a generalization of several endogenous due date
assignment rules discussed by Cheng and Gupta [25]. An
endogenous due date assignment rule is one where the
scheduler assigns a due date to each job as it arrives on
the basis of job characteristics, shop status and estimated
flow time through the shop. This scenario is quite realistic
in the semiconductor manufacturing context, since due dates
for orders are set through a process of negotiation between
the customer and the manufacturing facility. Examples of
these types of due-date assignment rules are

TWK: d_i = r_i + kp_i
SLK: d_i = r_i + p_i + k

where k is some constant. It is easy to see that if all jobs
are available simultaneously, our assumption on the
agreeability of process times and due dates covers both
these cases.
However, before considering the problem with agreeable
process times and due dates, we have the following result
for the special case where all jobs have equal processing
times. Recall that we index the jobs in order of ascending
due dates.
Proposition 6.1: The 1/p_j=p,B/Tmax problem is optimally
solved by the following algorithm:
Successively group the B jobs with smallest indices
into batches. Note that the batch containing the jobs with
highest indices may be partially empty.
Proof: By Lemma 6.2 we know that in an optimal solution the
jobs must be in batch-EDD order. Because processing times
are equal for all jobs, it is easy to see that all batches
should be full except possibly the last batch to be
processed. Q.E.D.
As was the case for the 1/r_j,p_j=p,B/Tmax problem with
agreeable release times and due dates, we can show that the
optimal solution to 1/B/Tmax with agreeable process times
and due dates will possess the property that jobs will be in
batch-EDD order.
Lemma 6.4: In the 1/B/Tmax problem with p_i ≤ p_j implying
d_i ≤ d_j, if a feasible solution exists, there exists a feasible
solution with the jobs in batch-EDD order.
Proof: Similar to that of Lemma 6.2. Q.E.D.
We can now present a dynamic programming algorithm
similar to Algorithm DP2 to determine whether or not a
feasible solution exists.
Algorithm DP3:
Let f(j) denote the minimum completion time of jobs
1,...,j if they can be feasibly scheduled, and infinity
otherwise. Then

f(0) = 0, f(j) = ∞ for j < 0,
f(j) = min{ f_i(j) | j-B+1 ≤ i ≤ j },

where

f_i(j) = f(i-1) + p_j, if f(i-1) + p_j ≤ d_i,
       = ∞, otherwise,

denotes the completion time of jobs 1 through j when jobs
i,i+1,...,j are processed in the last batch.
The justification of this algorithm is similar to that
of Algorithm DP2.
Algorithm DP3 can now be integrated into a procedure
similar to Algorithm TMAX1 to minimize Tmax. The only
difference from Algorithm TMAX1 is that instead of Algorithm
DP2, Algorithm DP3 is used in Step 1 to determine whether or
not a feasible schedule exists. This procedure we shall
refer to as Algorithm TMAX2.
As was the case for Algorithm DP2, the complexity of
Algorithm DP3 is O(nB). An upper bound on Tmax is provided
by np_max, where p_max = max_j {p_j}. Thus the time complexity of
Algorithm TMAX2 is O[nB log_2(np_max)].
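A sketch of Algorithm DP3 together with the TMAX2 bisection, under the same integer-data assumption as for TMAX1; jobs must be passed in ascending due-date order (which, by agreeability, is also ascending processing-time order), and the names are illustrative.

```python
def dp3_min_finish(p, d, B, shift=0):
    """Algorithm DP3: feasibility check for 1/B/Tmax with agreeable process
    times and due dates. Returns the minimum finishing time of a schedule in
    which no job is more than `shift` late, or None if none exists."""
    from math import inf
    n = len(p)
    f = [0] + [inf] * n
    for j in range(1, n + 1):
        for i in range(max(1, j - B + 1), j + 1):   # last batch = jobs i..j
            if f[i - 1] < inf:
                c = f[i - 1] + p[j - 1]             # batch time = longest job p_j
                if c <= d[i - 1] + shift:           # earliest due date in batch
                    f[j] = min(f[j], c)
    return f[n] if f[n] < inf else None

def tmax2(p, d, B):
    """Bisection over a common due-date shift; upper bound n * p_max."""
    if dp3_min_finish(p, d, B) is not None:
        return 0
    lo, hi = 0, len(p) * max(p)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if dp3_min_finish(p, d, B, mid) is not None:
            hi = mid
        else:
            lo = mid
    return hi
```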
Minimizing Number of Tardy Jobs
In this section we examine the problem of minimizing
the number of tardy jobs. The general problem with release
times, 1/r_j,B/ΣU_j, is NP-hard even for B=1 [59,60], although
the case with agreeable release times and due dates has been
solved by Kise et al. [58]. We would conjecture that the
problem where all jobs are available simultaneously,
1/B/ΣU_j, is also NP-hard. We provide a polynomial-time
dynamic programming algorithm for the 1/r_j,p_j=p,B/ΣU_j problem
with agreeable release times and due dates and the 1/B/ΣU_j
problem with agreeable process times and due dates.
Let us first consider the 1/r_j,p_j=p,B/ΣU_j problem with
agreeable release times and due dates. For the rest of this
section we shall assume that all jobs are indexed in order
of ascending due dates. We have the following result:
Lemma 6.5: In the 1/r_j,p_j=p,B/ΣU_j problem with agreeable
release times and due dates, there exists an optimal
solution which has the form (A,T), where A is a set of
batches containing the jobs completed on time and T is the
set of tardy jobs in any feasible sequence. Also the jobs
completed on time will be in batch-EDD order.
Proof: Let S be an optimal schedule, A the set of jobs
completed on time and T the set of tardy jobs. Rearranging S
so that the jobs in T are completed after all jobs in A,
while maintaining the relative order and batching of the
jobs in A, cannot increase the completion time of any job in
A. Since all jobs in A are on time, Tmax=0 for the set of
jobs in A. Hence by Lemma 6.2, there exists a sequence for A
such that all jobs in A are on time and in batch-EDD order.
Q.E.D.
Lemma 6.6: In the 1/r_j,p_j=p,B/ΣU_j problem with agreeable
release times and due dates, there exists an optimal
solution where the batches that finish on time, i.e.,
contain no tardy jobs, contain only consecutive jobs. In
other words, if a batch P which completes on time contains
k ≤ B jobs, and i is the smallest indexed job in P, then P
contains the jobs i, i+1,...,i+k-1.
Proof: Since the set of on-time jobs can be arranged in
batch-EDD order by Lemma 6.5, we only need to show that in
any nontardy batch P it is impossible that P contains i and
i+2 and not i+1. If such a situation were to occur, i+1 must
be a tardy job, since if it were nontardy it would violate
the batch-EDD ordering.
By the assumption of agreeable release times and due
dates, r_{i+1} ≤ r_{i+2}, which implies job i+1 is available when
batch P starts processing, and d_{i+1} ≤ d_{i+2}. Then we can
replace job i+2 in batch P with job i+1. Since P is on time
and contains job i, its completion time is at most d_i ≤ d_{i+1},
so i+1 will be on time; i+2 will now be tardy and we have no
extra tardy jobs. Q.E.D.
Based on this property we can develop a dynamic
programming procedure for this problem.
Algorithm DP4:
Let f(i,j) denote the minimum finishing time of the
on-time batches in a partial sequence considering jobs 1,...,j
where i jobs are completed on time. Then

f(0,j) = 0 for 0 ≤ j ≤ n, and initially f(i,j) = ∞ for all i > 0,
f(i,j) = min{ min{ f_k(i,j) | 1 ≤ k ≤ B }, f(i,j-1) }

where

f_k(i,j) = max{f(i-k,j-k), r_j} + p,
           if max{f(i-k,j-k), r_j} + p ≤ d_{j-k+1},
         = ∞, otherwise.

f_k(i,j) denotes the finishing time of the on-time jobs in a
partial sequence considering jobs 1,...,j when i jobs are on
time and k jobs are processed in the last batch. The optimal
number of tardy jobs will be n - max{ i | f(i,n) < ∞, 1 ≤ i ≤ n }.
This is due to the fact that the last nontardy batch must
contain consecutive jobs by Lemma 6.6.
The number of states in this dynamic program is n², and
each state is evaluated in O(B) operations. Hence the
complexity of DP4 is O(n²B).
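Algorithm DP4 can be sketched as follows; the function name is hypothetical, and jobs must be passed in ascending due-date order with agreeable release times, as the lemmas require.

```python
def dp4_min_tardy(r, d, p, B):
    """Algorithm DP4: minimum number of tardy jobs for 1/r_j,p_j=p,B/sum U_j
    with agreeable release times and due dates, in O(n^2 B)."""
    from math import inf
    n = len(r)
    # f[i][j]: min finish time of the on-time batches among jobs 1..j
    # when i of them are completed on time
    f = [[0] * (n + 1)] + [[inf] * (n + 1) for _ in range(n)]
    for j in range(1, n + 1):
        for i in range(1, j + 1):
            best = f[i][j - 1]                    # option: job j is tardy
            for k in range(1, min(B, i, j) + 1):  # last on-time batch: jobs j-k+1..j
                if f[i - k][j - k] < inf:
                    c = max(f[i - k][j - k], r[j - 1]) + p
                    if c <= d[j - k]:             # due date of job j-k+1
                        best = min(best, c)
            f[i][j] = best
    on_time = max(i for i in range(n + 1) if f[i][n] < inf)
    return n - on_time
```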
The extension of DP4 to the 1/B/ΣU_j problem with p_i ≤ p_j
implying d_i ≤ d_j is straightforward. It is easy to see
that Lemmas 6.5 and 6.6 hold for this problem as well. Thus
in order to extend DP4 to this problem, we only need to
redefine f_k(i,j) as follows:

f_k(i,j) = f(i-k,j-k) + p_j, if f(i-k,j-k) + p_j ≤ d_{j-k+1},
         = ∞, otherwise.

Hence we see that it is possible to solve this problem as
well in O(n²B) time.
Parallel Batch Machines
In this section we shall examine various parallel batch
machine scheduling problems. Throughout this section we
shall assume that we are given m parallel identical batch
processing machines. Other notation and definitions will
remain the same as for the single machine problems.
We will first consider the problem of minimizing total
flowtime on parallel batch machines, P/B/ΣF_j. For B=1 the
problem is solvable in polynomial time using the Shortest
Processing Time (SPT) algorithm [3]. Assume that jobs are
indexed in order of nondecreasing processing times. We can
prove the following:
Proposition 6.2: In an optimal solution to the P/B/ΣF_j
problem, batches on the same machine will be in batch-SPT
order.
Proof: Similar to the proof of Lemma 6.1 for the single
machine case. Q.E.D.
Given this property, we can develop a dynamic
programming algorithm to solve P/B/ΣF_j as follows:
Algorithm DP5:
Let f(j) denote the minimum total flow time in a
partial schedule containing jobs 1,...,j. Let F(i,j),
i=1,...,m, denote the completion time of the jobs scheduled
on machine i in the sequence corresponding to f(j).

f(0)=0, F(i,0)=0 for i=1,...,m
f(j)=∞, F(i,j)=∞ for all j<0, 1≤i≤m

f(j) = min_{1≤k≤B} { f(j-k) + k min_{1≤i≤m} { F(i,j-k) + p_j } }

F(i,j) = F(i,j-q) + p_j, if f(j) = f(j-q) + q{F(i,j-q) + p_j} for some q,
       = F(i,j-1), otherwise.

The optimal value is given by f(n). There are n states
in this dynamic program, and each state is evaluated in
O(mB) steps. Thus the complexity of this procedure is
O(nmB).
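The recursion of Algorithm DP5 can be transcribed directly, as sketched below with illustrative names; the last batch of size k is placed on the machine attaining the inner minimum, i.e., the one with the smallest current load.

```python
def dp5_parallel_flowtime(p, B, m):
    """Algorithm DP5 as stated in the text: total flowtime on m parallel
    identical batch machines (P/B/sum F_j), jobs in batch-SPT order."""
    from math import inf
    p = sorted(p)                             # Proposition 6.2: batch-SPT order
    n = len(p)
    f = [0.0] + [inf] * n
    F = [[0.0] * (n + 1) for _ in range(m)]   # F[i][j]: load of machine i
    for j in range(1, n + 1):
        best, bq, bi = inf, 1, 0
        for k in range(1, min(B, j) + 1):
            i = min(range(m), key=lambda x: F[x][j - k])  # least-loaded machine
            val = f[j - k] + k * (F[i][j - k] + p[j - 1])
            if val < best:
                best, bq, bi = val, k, i
        f[j] = best
        for i in range(m):
            F[i][j] = F[i][j - bq] + p[j - 1] if i == bi else F[i][j - 1]
    return f[n]
```

With m=1 this reduces to Algorithm DP1.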
We now consider the problem of minimizing makespan on
parallel batch machines, P/B/Cmax. This problem is NP-hard
in the strong sense even for B=1 [59,60], but considerable
research has been devoted to analyzing approximation
algorithms. One of the most successful in practice is the
LPT algorithm, which has been shown to have a worst-case
error bound of 4/3 [48]. We have the following result:
Proposition 6.3: Suppose we reindex the jobs in descending
order of processing times. Then there exists an optimal
solution to the P/B/Cmax problem such that all batches will
contain consecutive jobs. Also all batches except possibly
the one containing the highest indexed job will be full.
Proof: Consider two batches P and Q, perhaps on different
machines. Suppose there are two jobs i and j such that p_i < p_j
and i∈P, j∈Q, and there exists a job k∈P such that p_k > p_j.
Suppose we move i to Q and j to P. Since p_k > p_j, the
completion time of P is not worsened, while since p_i < p_j
the completion time of Q is not worsened. Repeating this
process, we see that all batches in an optimal solution will
contain consecutive jobs. Note that all batches except
possibly one must be full, since if a batch is partially
full, its completion time is not worsened by filling it with
jobs having smaller processing times than the largest job in
the batch. This implies that only the batch containing the
job with the smallest processing time, or largest index, can
be partially empty. Q.E.D.
Due to the fact that the processing time of a batch is
equal to the processing time of the job in the batch having
the longest processing time, it is possible to view all the
jobs in the same batch as a single aggregate job, with
process time equal to the processing time of the batch,
i.e., the longest job. The proposition above allows us to
determine the batch structure of an optimal solution a
priori. Thus the P/B/Cmax problem can be viewed as an
equivalent unit-capacity machine P//Cmax problem. We can use
the LPT heuristic developed for P//Cmax to obtain solutions
to P/B/Cmax. We can state the algorithm as follows:
Algorithm ELPT:
1) Rank the jobs in decreasing order of processing times.
Batch the jobs by successively placing the B jobs with
longest processing times into the same batch. Let p(B_k) be
the processing time of batch B_k.
2) Order the batches in nonincreasing order of p(B_k)
and assign them to the machines as they become free.
Note that the first step of the algorithm batches the
jobs according to the result of Proposition 6.3, and that
the second step applies the LPT heuristic to the P//Cmax
problem obtained by aggregating the jobs in the batches. We
have the following result:
Proposition 6.4: Let C* denote the optimal solution to the
P/B/Cmax problem, and C(ELPT) the makespan value obtained by
Algorithm ELPT. Then C(ELPT) ≤ (4/3 - 1/3m)C*.
Proof: We know from Proposition 6.3 that an optimal solution
to P/B/Cmax must have the batch structure constructed by the
algorithm. Given the batch structure, it can be seen that
P/B/Cmax is equivalent to the P//Cmax problem obtained by
considering batches as jobs with processing times equal to
that of the longest job in the batch, and that the two
problems will have the same optimal makespan C*. In the
second step of the algorithm we are applying the LPT
heuristic to the P//Cmax problem, and we know that for this
problem this heuristic will yield a makespan C(LPT) ≤ (4/3
- 1/3m)C*. Q.E.D.
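Algorithm ELPT is short enough to state directly. The sketch below uses a heap to implement the "assign them to the machines as they become free" rule of Step 2; the function name is illustrative.

```python
import heapq

def elpt_makespan(p, B, m):
    """Algorithm ELPT: batch the B longest remaining jobs together, then
    assign batches to m machines in LPT order (earliest-free machine first)."""
    jobs = sorted(p, reverse=True)
    # Batch time = longest job in the batch, i.e. the first of each group of B
    batch_times = [jobs[i] for i in range(0, len(jobs), B)]
    loads = [0] * m
    heapq.heapify(loads)
    for t in batch_times:              # already in non-increasing order
        heapq.heappush(loads, heapq.heappop(loads) + t)
    return max(loads)
```

By Proposition 6.4, the value returned is within a factor (4/3 - 1/3m) of the optimal makespan.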
Summary
In this chapter the area of scheduling batch processing
machines has been extensively explored. The motivation for
this work was the need to develop procedures for solving
subproblems related to burn-in operations. In terms of
complexity classification the results can be summarized as
follows. With the exception of the first problem, the others
have been classified for the first time by this research.
Problem                                    Classification   Reference
1/B/Cmax                                   P                Bartholdi
1/B/ΣF_j                                   P                Section 1
1/B/Tmax                                   Open
1/B/Tmax, agr. p_i, d_i                    P                Section 2
1/p_j=p,B/Tmax                             P                Section 2
1/r_j,B/Tmax                               NP-hard
1/r_j,p_j=p,B/Tmax                         Open
1/r_j,p_j=p,B/Tmax, agr. r_j, d_j          P                Section 2
1/r_j,B/ΣU_j                               NP-hard
1/r_j,p_j=p,B/ΣU_j                         Open
1/r_j,p_j=p,B/ΣU_j, agr. r_j, d_j          P                Section 3
1/B/ΣU_j                                   Open
1/B/ΣU_j, agr. p_i, d_i                    P                Section 3
P/B/ΣF_j                                   P                Section 4
P/B/Cmax                                   NP-hard
Of the open problems, we would conjecture that 1/B/Tmax
and 1/B/ΣU_j are NP-hard. We have also shown that certain
results pertaining to the worst-case performance of
heuristics developed for parallel identical machines can be
extended to the case of parallel batch processing machines.
CHAPTER VII
PROTOTYPE IMPLEMENTATION OF APPROXIMATION METHODOLOGY
Introduction
In Chapter IV the problem of scheduling a semiconductor
test facility was formulated as a job shop scheduling
problem and an approximation methodology for its solution
outlined. This methodology proceeds by dividing the job shop
into a number of workcenters and scheduling these
individually, using a disjunctive graph to capture
interactions between the workcenters. The implementation of
such a methodology in any real world environment requires
efficient methods of obtaining good schedules for the
individual workcenters. In Chapters V and VI various
solution procedures for workcenters consisting of a single
machine with sequencedependent setup times and single and
multiple batch processing machines were developed and
analyzed. In this chapter we illustrate the integration of
one of the algorithms analyzed in Chapter V into a prototype
implementation of the approximation methodology. In order to
do this we first describe the hypothetical environment in
which the methodology is to be implemented, and then present
the details of the prototype implementation of the
approximation methodology. In a final section we present the
results of preliminary computational experimentation with
the approximation methodology. Since it is prohibitively
time-consuming to obtain optimal solutions for any but the
smallest problems, we compare the performance of the
approximation methodology with a dispatching rule similar to
those currently in use in industry. We show that the
approximation methodology obtains solutions of comparable
quality to those obtained by the dispatching rule, and
discuss ways to further improve its performance.
Implementation Environment
For a prototype implementation of the methodology we
consider a subset of a testing facility consisting of a
number of testing workcenters and a brand workcenter. This
configuration is very similar to the digital test facility
of a large semiconductor manufacturer extensively studied by
the author. While in the real facility there may be more
than one machine of a given type, we shall assume that all
such groups of machines can be approximately modelled as
single machine workcenters. The test workcenters, as
described in Chapter III, have sequencedependent setup
times while the brand workcenter does not. Product flows
from the testers to the brand workcenter and then out of the
facility. Each product is tested on a specific tester. Thus
there is no flow of work between the different test
workcenters. The product flows are assumed to be similar for
all products and to consist of the operations described
below:
1) Initial room temperature testing
2) Low temperature testing
3) High temperature testing
4) Branding
We shall adopt maximum lateness (Lmax) as the performance
measure to be minimized in this environment. An example of
such an environment consisting of four test workcenters and
a brand workcenter is shown in Figure 7.1.
Implementation of Approximation Methodology
From the point of view of implementing the
approximation methodology, we first note that there are two
different types of workcenter:
i) The test workcenters, each consisting of a single machine
with sequence-dependent setup times. The multiple operations
on each lot are represented by precedence constraints. This
scheduling problem, 1/r_j,prec,SDST/Lmax, is NP-hard (see
Chapter V).
ii) The brand workcenter, where there are no precedence
constraints or setup times. The problem of scheduling this
workcenter, 1/r_j/Lmax, is again NP-hard [59,60].
[Figure 7.1 An example implementation environment: four test workcenters (Tester 1 through Tester 4) feeding a brand workcenter. Figure not reproduced.]
The critical components of the approximation
methodology are the algorithms used to determine the
critical workcenter and to sequence it. We would want these
algorithms to be fast from a computational standpoint and to
yield good solutions, preferably optimal ones. However,
since it is difficult to obtain optimal schedules for the
testing workcenters for any but the smallest problems, we
use the Extended Jackson Rule for both determining
criticality and for sequencing the critical workcenter. The
motivation behind this decision is the following:
- Analytically proven worst-case bounds exist for this
algorithm, both in the presence and absence of sequence-
dependent setup times. When applied to the equivalent
problem of minimizing makespan in the presence of release
times and delivery times, the worst-case performance of this
algorithm in the absence of sequence-dependent setup times
has been shown by Carlier [20] to differ from the optimal
value by at most the value of one of the processing times.
In Chapter V it was shown that the algorithm yields a
solution of at most twice the optimal value of the problem
without sequence-dependent setups if release times and due
dates are agreeable, and at most three times that optimal
value for arbitrary release times and due dates.
- Applicability of the algorithm to both the testing and
branding workcenters.
- Low computational burden (O(n log n), where n is the
number of operations to be sequenced).
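The rule can be sketched in a few lines. The following is a minimal illustration of the Extended Jackson Rule for the 1/r_j/Lmax case, ignoring setup times and precedence constraints; the function name and the job encoding are our own, not part of the original implementation.

```python
import heapq

def extended_jackson(jobs):
    """Extended Jackson Rule for 1/r_j/Lmax.

    jobs: list of (release, processing, due) tuples.  Whenever the
    machine becomes free, start the available job with the earliest
    due date.  Returns (sequence of job indices, Lmax).
    """
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])  # by release time
    ready, t, i = [], 0, 0
    schedule, lmax = [], float("-inf")
    while i < len(order) or ready:
        # Move every job released by time t into the ready heap (keyed on due date).
        while i < len(order) and jobs[order[i]][0] <= t:
            heapq.heappush(ready, (jobs[order[i]][2], order[i]))
            i += 1
        if not ready:                      # machine idles until the next release
            t = jobs[order[i]][0]
            continue
        _, j = heapq.heappop(ready)
        t += jobs[j][1]                    # job j completes at time t
        schedule.append(j)
        lmax = max(lmax, t - jobs[j][2])   # lateness = completion - due date
    return schedule, lmax
```

The sort and the heap operations give the O(n log n) bound mentioned above.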
Another important component of the methodology is the
algorithm used to solve the longest path problems required
to capture the interactions of the workcenters from the
disjunctive graph representation and the incorporation of
this information into the subproblems. In this
implementation an O(n²) labelling algorithm was used to
avoid having to renumber the nodes in a topological order.
This algorithm constitutes by far the greatest part of the
computational burden. Use of the O(n) procedure developed by
Adams et al.[1] would result in substantial savings.
It should be noted that the methodology is not
guaranteed to yield a feasible solution, i.e., an acyclic
graph, at each iteration. The reason for this is that fixing
the disjunctive arcs associated with a particular workcenter
in an acyclic manner does not guarantee that the directed
graph for the whole facility is acyclic. In order to handle
this problem, each time a machine was scheduled the
resulting directed graph for the entire facility was checked
for cycles. If a cycle had occurred, the machine responsible
was identified and rescheduled. This was done by finding
which operation was being completed before its predecessor,
updating the release time of that operation on that machine
to the completion time of the predecessor, and reapplying
the Extended Jackson Rule to that machine. In all but one of
the sample problems solved, this procedure resolved the
infeasibilities. However, since a number of the local
problems may need to be re-solved, this leads to a certain
increase in computational burden. A more efficient approach
to dealing with this problem will form part of future
research.
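The longest path calculation and the cycle check can share one topological pass. The sketch below, under node and arc encodings of our own choosing, computes longest path lengths from a dummy source node with Kahn's algorithm in O(V+E) time, the idea behind the O(n) procedure cited above, and reports a cycle whenever some node never reaches indegree zero.

```python
from collections import deque

def longest_paths(n, arcs):
    """Longest path lengths from node 0 in a weighted directed graph.

    n: number of nodes (node 0 is the source); arcs: list of (u, v, w).
    Returns (dist, None) if the graph is acyclic, or (None, True) if a
    cycle is found, which signals that the machine just scheduled must
    be resequenced.
    """
    succ = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, w in arcs:
        succ[u].append((v, w))
        indeg[v] += 1
    dist = [float("-inf")] * n
    dist[0] = 0
    queue = deque(v for v in range(n) if indeg[v] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v, w in succ[u]:
            if dist[u] > float("-inf"):            # only propagate reachable labels
                dist[v] = max(dist[v], dist[u] + w)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if seen < n:                 # some node never reached indegree 0: a cycle
        return None, True
    return dist, None
```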
We can now summarize the prototype approximation
methodology as follows:
Step 1: Represent the facility using a disjunctive
graph as described in Chapter IV. Obtain an initial set of
release times and due dates for each workcenter by solving
longest path problems in the graph corresponding to the
situation where no machine has had its schedule fixed.
Step 2: Sequence each workcenter whose schedule has not
yet been fixed using the Extended Jackson Rule. Select the
workcenter with the largest Lmax value as the most critical. Fix
the schedule of this workcenter and check for cycles. If a
cycle is found, reschedule the relevant workcenter to
restore feasibility.
Step 3: For each workcenter already scheduled, perform
the following steps:
- Update the release time and due date information
using the longest path procedure as described in Chapter IV,
- Resequence the workcenter using the new information
thus obtained, again using the Extended Jackson Rule,
- Check for cycles in the resulting graph and
reschedule to eliminate those found.
In this implementation, Step 3 was performed twice for
all scheduled workcenters at each iteration.
Step 4: If all workcenters have been scheduled, stop.
Else, go to Step 2.
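The four steps can be summarized as a loop. The following toy sketch, with hypothetical helper names and without the disjunctive graph coupling (the Step 3 update is only marked by a comment), shows the control flow of Steps 1-4:

```python
import heapq

def ejr(jobs):
    """Extended Jackson Rule: earliest due date among released jobs."""
    pending = sorted(jobs, key=lambda j: j[0])    # (release, proc, due)
    ready, t, i = [], 0, 0
    seq, lmax = [], float("-inf")
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][0] <= t:
            heapq.heappush(ready, (pending[i][2], pending[i]))
            i += 1
        if not ready:
            t = pending[i][0]
            continue
        _, (r, p, d) = heapq.heappop(ready)
        t += p
        seq.append((r, p, d))
        lmax = max(lmax, t - d)
    return seq, lmax

def shifting_bottleneck(workcenters):
    """Steps 1-4 in miniature: repeatedly fix the workcenter with the
    largest Lmax.  The longest-path update of release times and due
    dates between iterations (Step 3) is elided; its place is marked.
    """
    fixed = {}
    while len(fixed) < len(workcenters):
        # Step 2: sequence every unscheduled workcenter, pick the critical one.
        candidates = {name: ejr(jobs) for name, jobs in workcenters.items()
                      if name not in fixed}
        critical = max(candidates, key=lambda name: candidates[name][1])
        fixed[critical] = candidates[critical]
        # Step 3 would go here: update r_j, d_j via longest paths, check
        # for cycles, and resequence the already-fixed workcenters.
    return fixed
```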
We shall now present the results of a preliminary
computational study of this implementation using data
derived from a real test environment.
Computational Testing
The methodology outlined above was run on 24 test
problems and compared with an Earliest Due Date dispatching
rule. Among the available operations, i.e., those whose
predecessors have already been completed, the dispatching
rule gives priority to the one belonging to the lot with the
earliest due date. During
this experiment it was assumed that all test workcenters had
equal workload and that all lots to be scheduled were
available simultaneously.
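A single-machine caricature of this dispatching rule, with a data layout of our own invention (in the real facility each operation runs on its own machine), might look like:

```python
import heapq

def edd_dispatch(lots):
    """Earliest Due Date dispatching on a single machine.

    lots: dict lot_id -> (due_date, [op_time_1, op_time_2, ...]);
    operations within a lot must run in order (predecessors first).
    Whenever the machine frees up, start the next unstarted operation
    of the lot with the earliest due date.
    """
    ready = [(due, lot) for lot, (due, ops) in lots.items()]
    heapq.heapify(ready)
    next_op = {lot: 0 for lot in lots}
    t, trace = 0.0, []
    while ready:
        due, lot = heapq.heappop(ready)
        ops = lots[lot][1]
        k = next_op[lot]
        t += ops[k]                      # run operation k of this lot
        trace.append((lot, k))
        next_op[lot] += 1
        if next_op[lot] < len(ops):      # lot still has operations left
            heapq.heappush(ready, (due, lot))
    return trace, t
```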
Information on process and setup times was derived from
data gathered in the actual facility motivating this
research. This information is summarized below:
Process Times
Initial room temp. test    6 seconds/chip
Low temp. test             8 seconds/chip
High temp. test            8 seconds/chip
Brand                      0.5 seconds/chip
Setup Times
Sequence-dependent setup times occur only for the
testing workcenters. For operations i and j carried out on
the same tester, the setup time required to change from i to
j, s_ij, was assumed to be uniformly distributed over the
interval [0, p_j], where p_j is the processing time of the
operation being set up. This assumption is based on
observation of the production equipment in use, and was also
used in the worst-case analysis of the Extended Jackson Rule
in Chapter V.
Lot Sizes
Exponentially distributed with a mean of 800 chips per lot.
Due Dates
Uniformly distributed between 960 minutes and 5760 minutes.
This is based on the assumption of a 480-minute shift, and
allows for lots to be already late when they arrive at the
testing facility. The upper limit corresponds to four three-
shift days, and is derived from current practice.
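The distributions above can be bundled into a small instance generator. The names and structure below are ours; only the numerical parameters come from the facility data:

```python
import random

# Per-chip process times (seconds), from the facility data above.
STEP_SECONDS = {"room": 6.0, "low": 8.0, "high": 8.0, "brand": 0.5}

def make_lot(rng):
    """One random lot: size ~ Exp(mean 800 chips), due date ~ U[960, 5760] min."""
    size = max(1, int(rng.expovariate(1.0 / 800)))   # chips per lot
    due = rng.uniform(960.0, 5760.0)                  # minutes
    # Processing time of each operation in minutes.
    ops = {step: size * sec / 60.0 for step, sec in STEP_SECONDS.items()}
    return {"size": size, "due": due, "ops": ops}

def setup_time(rng, p_j):
    """Sequence-dependent setup ~ U[0, p_j], p_j = time of the op being set up."""
    return rng.uniform(0.0, p_j)

rng = random.Random(7)          # fixed seed for reproducibility
lot = make_lot(rng)
```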
Experimental Results
We examined two facilities, one consisting of two test
systems and a brand workcenter and the other of four test
systems and a brand workcenter. Each configuration was
examined with different numbers of lots per tester. For each
combination of facility configuration and number of lots per
tester, five randomly generated problems were solved using
the approximation methodology and the dispatching rule. Of
these, one problem was discarded because the approximation
methodology yielded an infeasible solution, and another due
to biased input data (small lot sizes resulting in
processing times of zero for one lot). The results are
summarized in Table 7.1. The dispatching rule value was used
as a baseline for comparison. Hence the percentage deviation
of the maximum lateness obtained by the approximation
methodology from the value obtained by the dispatching rule
is given. This value is calculated as

    %Deviation = 100 x [Lmax(approximation methodology) - Lmax(dispatching rule)] / Lmax(dispatching rule)
The results shown in Table 7.1 indicate that the
approximation methodology provides the same quality of
solution on average as the dispatching rules. This is
encouraging, since in their study, Adams et al.[1] found
that their approximation approach yielded improvements of
at most 10% over dispatching rules for makespan. Hence our
findings are of the same order as theirs, especially
when the more complex performance measure we are trying to
minimize is considered. The results are even more
encouraging when we note that the implementation described
in this chapter does not solve the local problems to
optimality, as did Adams et al.[1].
Table 7.1 Computational Results

                         Lmax (mins)
Machines  Lots/Tester    Approximation  Dispatching   %Deviation
                         Methodology    Rule
2         4              776            776              0.0
                         1271           1189             6.8
                         1509           1517            -0.5
                         634            634              0.0
                         1976           1810             9.2
2         5              834            834              0.0
                         79             79               0.0
                         1316           1316             0.0
                         1001           1001             0.0
4         2              768            768              0.0
                         1532           1532             0.0
                         1081           1081             0.0
                         991            991              0.0
4         4              3345           3571            -6.3
                         718            287            150.2
                         1030           1030             0.0
                         1299           1299             0.0
                         1354           1409            -3.9
4         5              2757           2757             0.0
                         1320           1700           -22.4
                         2288           1670            37.0
                         1496           3234           -53.7
                         1696           1288            31.0
It should be noted that the algorithm used to schedule
the workcenters is a heuristic, not an optimal procedure.
Furthermore, it does not take the sequencedependent setup
times into consideration. The use of a heuristic that takes
setup times into account should improve the results greatly.
This could be done by applying the neighborhood search
procedure described in Chapter V to the schedule obtained by
the Extended Jackson Rule for the critical machine at each
iteration.
Another point is that the Extended Jackson Rule uses
the due dates and release times estimated using the longest
path procedure and the disjunctive graph. The accuracy of
these estimates is of extreme importance to the
effectiveness of the overall methodology. As was discussed
in Chapter IV, the longest path procedure will tend to
underestimate lead times since it ignores machine
interference effects at machines not yet scheduled. The
multiterminal structure of the graph representation used,
compared to the single sink structure used when minimizing
makespan, further complicates this problem. Better
techniques for estimating release times and due dates for
the local problems should lead to enhanced performance of
the approximation methodology.
From a computation time perspective, the dispatching
rules are unquestionably much faster. The use of an 0(n)
procedure for the longest path problems solved at each step
of the approximation methodology would greatly reduce the
difference in computational effort. The more efficient
coding of the algorithms involved would also help alleviate
the differences.
Summary and Conclusions
In this chapter we have demonstrated the implementation
of the overall approximation methodology in a realistic
scenario and compared its performance with a dispatching
rule similar to those currently used in industry. The
approximation methodology has been shown to yield comparable
solutions, and directions to further improve the quality of
the solutions obtained and the computational efficiency have
been outlined.
The results of this chapter demonstrate that
approximation methodologies of this type have the potential
for application in real-world environments and that, with
the development of more efficient algorithms for the various
workcenter problems and better due-date estimation, they
should provide schedules superior to those obtained from
dispatching rules. The results of this study and those of
Adams et al.[1] indicate that improvements of the order of
5-10% on average over dispatching rules can be expected. The
benefits resulting from improved schedules must be weighed
against the extra computational effort required by the
methodology. However, especially in heavily loaded
facilities, improvements of the order of 5% in performance
measures like maximum lateness or number of tardy jobs could
lead to substantial improvements in customer service which
would justify the extra computational effort. Especially
when implemented on a rolling horizon basis in a dynamic
environment, where new schedules are generated say once a
shift instead of in real time, these methods offer an
alternative to dispatching rules for scheduling complex
facilities.
CHAPTER VIII
SUMMARY AND FUTURE DIRECTIONS
Summary of Accomplishments
The purpose of this research has been to develop
production scheduling methodologies for a class of job shop
problems whose structure is derived from semiconductor test
operations. Chapter II outlined the general semiconductor
manufacturing process, and described the testing process
which provides the motivation for the problems examined in
this study. Chapter III presented a review of the relevant
literature in the areas of scheduling theory and
semiconductor manufacturing. Based on this review, the
contributions of this research to the areas of scheduling
theory and operations management in the semiconductor
industry were assessed.
In Chapter IV the general problem solving scheme by
which the problem of scheduling the job shop under study
will be solved was outlined. This is an approximation method
similar to the Shifting Bottleneck approach of Adams et
al.[l], which iteratively solves a set of subproblems in
which individual workcenters are scheduled. This approach is
extended in a number of ways in this chapter to allow for
lateness-related performance measures, sequence-dependent
setup times and different types of workcenters. A number of
important issues in the development of such a methodology
were also discussed in this chapter.
Chapter V presented detailed formulations and
complexity classifications for a class of scheduling
problems arising from the need to schedule workcenters
consisting of a single machine. These problems are all NP-
hard and are characterized by sequence-dependent setup
times, precedence constraints and lateness-related
performance measures. Optimal implicit enumeration
algorithms were provided for the cases where all jobs are
available simultaneously. Heuristics were developed and
analyzed for a number of other cases, and tight error bounds
were derived for these heuristics under extremely mild
assumptions on the relation between setup and processing
times. The work in this chapter constitutes the first
analysis in the literature of heuristic algorithms for
scheduling problems with lateness-related performance
measures and sequence-dependent setup times.
In Chapter VI another class of scheduling problems
little studied to date, that of scheduling batch
processing machines, is examined. These problems are
motivated by the need to schedule burnin operations in the
semiconductor test environment. Polynomialtime solution
procedures are presented for a number of problems involving
single and parallel identical batch processing machines. A
heuristic developed for the case of parallel identical
machines is extended to parallel batch processing machines
and a comprehensive complexity classification of these
problems is given.
Chapter VII provides a sample implementation of the
overall approximation methodology using some of the results
and algorithms developed in the previous chapters in a
scenario based as closely as possible on a real testing
facility. The results of the methodology are compared with
the results obtained by the current practice, and the
viability of the approach demonstrated.
In the rest of this chapter we shall examine how the
work in this dissertation can be extended. These future
directions will be discussed under three main headings:
single and parallel machine problems, batch scheduling
problems and the extension of the approximation methodology.
Single and Parallel Machines
As was described in Chapter V, the majority of the
single-machine problems of interest in this research are NP-
hard even when all jobs are available simultaneously. We
have developed implicit enumeration algorithms for the
problems of minimizing Lmax and ΣU_j when all lots are
available simultaneously, as well as heuristics with
tight worst-case error bounds for the problem of minimizing
Lmax with dynamic job arrivals.
Results to date indicate that it is difficult to
develop efficient branch and bound algorithms for these
problems. The reason for this is that it is difficult to
obtain tight lower bounds. Experimentation with Algorithm BB
developed in Chapter V for the 1/prec,SDST/Lmax problem
indicates that the bounds increase very slowly as enumeration
proceeds and that the lower bounds are in general weak.
Another factor is that the subproblems that must be solved
to yield lower bounds are often themselves NP-hard. The
presence of non-simultaneous release times makes these
difficulties even more acute.
An interesting direction to explore in future work is
the application of integer programming techniques developed
in recent years for the TSP. These methods make use of facet
cuts in order to generate sharp lower bounds which can be
used in branch and bound algorithms. For example,
formulating the 1/prec,SDST/Lmax problem as a TSP with time
window constraints and different objective functions would
allow some of the new integer programming technology to be
brought to bear on this problem.
The successful extension of heuristics developed for
problems of minimizing Lmax without sequence-dependent setup
times is not likely to carry over to the problems of
minimizing ΣU_j. This performance measure is in general more
difficult to optimize than Lmax, and the presence of
sequence-dependent setup times renders invalid a great
majority of the results used to develop good lower bounds
for the cases without sequence-dependent setups.
A natural extension of this work is to examine measures
of performance related to tardiness. While the 1//ΣT_j
problem is NP-hard even without sequence-dependent
setups [59,60], enumeration algorithms and heuristics may be
developed and analyzed, as well as incorporated into the
overall approximation methodology.
The problems of minimizing Lmax and ΣU_j on parallel
identical machines are both NP-hard in the strong sense even
without precedence constraints or sequence-dependent
setups [59,60]. Thus, the problems of interest with
precedence constraints and sequence-dependent setup times,
P/prec,SDST/Lmax and P/prec,SDST/ΣU_j, are both NP-hard in
the strong sense. However, Gusfield [50] has developed worst-
case error bounds for the P/r_j/Lmax problem. The extension
of these algorithms to the case with sequence-dependent
setups may be possible under certain assumptions on the
nature of the setup times.
Batch Processing Machines
In Chapter VI we presented polynomialtime solution
procedures for a number of problems dealing with the
scheduling of single and parallel batch processing machines.
The results of this work are summarized in the table at the
end of that chapter. A direction for future research is the
examination of the open problems, 1/B/ΣU_j and 1/B/Tmax, to
either provide a rigorous proof of their NP-hardness or a
polynomial-time solution procedure. Further work can be done
on the development and analysis of heuristics for the
problems known to be NP-hard, such as was done for the
P/B/Cmax problem. The problems of minimizing Lmax and ΣU_j on
parallel batch machines are NP-hard even for a batch size of
one [59,60], but polynomial and/or pseudopolynomial solution
procedures may be obtainable under certain assumptions on
the processing times and due dates. The problem of
minimizing total tardiness on a single batch processing
machine, 1/B/ΣT_j, is in the same situation.
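To fix ideas about the batch machine model used here (up to B jobs per batch, a batch running for as long as its longest job), the following sketch implements a full-batch longest-processing-time rule for makespan on a single batch machine; it illustrates the model rather than reproducing the algorithms of Chapter VI:

```python
def fblpt_batches(p_times, B):
    """Full-batch longest-processing-time rule for a single batch machine.

    Jobs are sorted by decreasing processing time and packed into
    consecutive batches of at most B jobs; a batch runs for as long as
    its longest job.  Returns (batches, makespan).
    """
    jobs = sorted(p_times, reverse=True)
    batches = [jobs[i:i + B] for i in range(0, len(jobs), B)]
    makespan = sum(batch[0] for batch in batches)  # longest job leads each batch
    return batches, makespan
```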
The work on batch processing machines can also be
extended by generalizing the model of batch processing
machines used in this dissertation. A critical assumption in
the work to date is that all jobs require exactly the same
amount of oven capacity, i.e., are of the same size. In
reality, however, the amount of oven capacity a job may
require depends on the size of the lot of chips represented
by that job.
Another interesting direction for future work is the
consideration of multistage systems containing batch
processing machines. Ahmadi et al.[2] have examined a number
of problems in this area with the goal of minimizing
flowtimerelated performance measures. However, the batch
machine model they use is somewhat simpler than the one used
in this dissertation. While a great many of the scheduling
problems that will arise in these systems are likely to be
NPhard, it may be possible to develop polynomialtime
algorithms for special cases and to develop and analyze good
heuristics.
Overall Approximation Scheme
In Chapter IV the problem of scheduling a semiconductor
test facility was formulated as a job shop scheduling
problem and the overall approximation scheme to be used in
its solution presented. The approximation methodology is
similar to the Shifting Bottleneck procedure of Adams et
al.[l], but is extended in a number of directions. A sample
implementation of the approximation scheme was illustrated
in Chapter VII.
The first step in any future research in this area is
to work out the implementation details for the methodology
when a number of different types of workcenter are present.
Specifically, modification of the disjunctive graph used to
represent the state of the job shop to enable representation
of parallel identical machines and single and parallel batch
processing machines needs to be examined. Bartholdi et
al.[15] have examined a number of these issues but further
research is needed. Another area of interest for future work
is the due date estimation mechanism. The current mechanism,
based on the longest path calculations as the schedule is
constructed, ignores machine interference effects at the
machines not yet scheduled. Vepsalainen and Morton [95] have
shown that including an estimate of waiting times at
machines not yet visited into the due date estimation
improves performance considerably.
The methodology in its present state addresses a static
problem, where it is assumed that all lots to be processed
over a planning horizon are available at time zero. However,
in most realworld manufacturing environments, the arrival
of jobs to a production facility is dynamic. This is
certainly true in the semiconductor industry, where lots are
constantly arriving at the test area from a number of
different sources. Hence the extension of the methodology
developed in this dissertation to handle the case of the
dynamic job shop would be a valuable contribution.
An important characteristic of the semiconductor
testing process is that since processing times are long, job
arrivals do not occur at very frequent intervals.
Interarrival times of lots at testing workcenters have been
found to be of the order of hours rather than minutes or
seconds. Thus the eighthour shift constitutes a valid time
frame for scheduling purposes. Given this insight, and the
fact that jobs arrive over the course of a shift, we can
implement the approximation methodology on a rolling horizon
basis. The main idea is that at the beginning of each shift
the current state of the shop, i.e., available jobs and
times when machines will complete processing the job
currently being worked on and become available, will be
viewed as a static problem. This static problem will be
solved using the methodology and the solution implemented
for that shift. At the start of the next shift, the system
status will be updated and the procedure repeated. Rolling
planning horizon approaches have been extensively examined
in the context of production planning and dynamic lot sizing
problems (see Lee and Denardo[69] for an example) . Raman et
al.[87] give an example of such an implementation in the
scheduling of a general flexible manufacturing system.
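The rolling horizon scheme described above reduces to a simple driver loop. In the sketch below the static solver is a parameter and the machine carry-over rule is deliberately simplistic; all names are illustrative, not part of the dissertation's implementation:

```python
def rolling_horizon(shifts, solve_static, shift_len=480):
    """Rolling-horizon driver: once per shift, freeze the shop state,
    solve the resulting static problem, and implement that schedule.

    shifts: list of job batches arriving before each shift boundary.
    solve_static: any static scheduler (e.g. the approximation
    methodology); it receives (jobs, machine_free_times) and returns a
    list of (job, machine) assignments.
    """
    machine_free = {}            # machine -> time it finishes its current job
    backlog, schedules = [], []
    for k, arrivals in enumerate(shifts):
        backlog.extend(arrivals)
        # Snapshot at the shift boundary: available jobs + machine ready times.
        plan = solve_static(list(backlog), dict(machine_free))
        schedules.append(plan)
        done = {j for j, _ in plan}
        backlog = [j for j in backlog if j not in done]
        for job, machine in plan:
            machine_free[machine] = (k + 1) * shift_len  # simplistic carry-over
    return schedules
```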
The extension of the methodology to a rolling horizon
implementation poses some interesting problems. Due to the
fact that preemption is not possible in the testing
operation, jobs occupying machines at the beginning of each
shift must be completed before new work can be assigned to
that machine. This leads to nonsimultaneous ready times for
machines as well as jobs. Extension of the methodology to
handle this feature would require the development of
solution procedures that were capable of handling non
simultaneous machine and job availability. The work of
Lee [70] examining parallel identical machine scheduling with
simultaneous job availability but nonsimultaneous machine
availability seems to be the only related research in the
literature.
Finally, the point should be made that the methodology
as developed in Chapter IV is an extremely flexible one.
Given a particular facility, an engineer can design his own
implementation by putting together the various algorithms
for the subproblems representing the various workcenters to
be scheduled. The prototype implementation described in
Chapter VII is an example. The level of detail in the
representation of the facility, e.g., should a group of
three identical test systems be treated as one workcenter or
as three, is up to the user, although data requirements
should be taken into account. Given the large mainframe
computers used to run commercial CIM systems such as COMETS
described in Chapter III, the methodology should be capable
of being used effectively in fairly sizeable facilities.
REFERENCES
[1] Adams, J., Balas, E. and Zawack, D., "The Shifting Bottleneck Procedure for Job-Shop Scheduling," Management Science Vol.34, No.3, 391-401, 1988.
[2] Ahmadi, J.H., Ahmadi, R.H., Dasu, S. and Tang, C.S., Batching and Scheduling Jobs on Batch and Discrete Processors, Anderson School of Management, University of California, Los Angeles, 1989.
[3] Baker, K.R., Introduction to Sequencing and Scheduling, Wiley, New York, 1974.
[4] Baker, K.R., "Sequencing Rules and Due Date Assignments in a Job Shop," Management Science Vol.30, No.9, 1093-1104, 1984.
[5] Baker, K.R., "An Elimination Method for the Flow-Shop Problem," Operations Research Vol.23, No.1, 159-162, 1975.
[6] Baker, K.R., "A Comparative Study of Flowshop Algorithms," Operations Research Vol.23, No.1, 62-73, 1975.
[7] Baker, K.R. and Su, Z.S., "Sequencing with Due Dates and Early Start Times to Minimize Maximum Tardiness," Naval Research Logistics Quarterly Vol.21, 171-176, 1974.
[8] Balas, E., "Machine Sequencing via Disjunctive Graphs: An Implicit Enumeration Approach," Operations Research Vol.17, 941-957, 1969.
[9] Balas, E., "Project Scheduling with Resource Constraints," in Applications of Mathematical Programming Techniques, E.M.L. Beale (ed.), The English Universities Press Ltd, London, 1970.
[10] Balas, E., "Machine Sequencing: Disjunctive Graphs and Degree-Constrained Subgraphs," Naval Research Logistics Quarterly Vol.17, No.1, 1970.
[11] Balas, E., "On the Facial Structure of Scheduling Polyhedra," Mathematical Programming Study Vol.24, 179-218, 1985.
[12] Balas, E. and Toth, P., "Branch and Bound Methods," in The Travelling Salesman Problem, E.L. Lawler, A.H.G. Rinnooy Kan, J.K. Lenstra, D.B. Shmoys (eds.), John Wiley, New York, 1985.
[13] Barker, J.R. and McMahon, G.B., "Scheduling the General Job Shop," Management Science Vol.31, No.5, 594-598, 1985.
[14] Barnes, J.W. and Vanston, L.K., "Scheduling Jobs with Linear Delay Penalties and Sequence-Dependent Setup Costs," Operations Research Vol.29, No.1, 146-160, 1981.
[15] Bartholdi, J.J., Lofgren, C. and Sigismondi, G., Scheduling Technology for Semiconductor Fabrication, Tech. Report, NSF SBIR Grant, 1988.
[16] Bitran, G.R. and Tirupati, D., "Planning and Scheduling for Epitaxial Wafer Production," Operations Research Vol.36, No.1, 34-49, 1988.
[17] Bitran, G. and Tirupati, D., "Development and Implementation of a Scheduling System for a Wafer Fabrication Facility," Operations Research Vol.36, No.3, 377-395, 1988.
[18] Burman, D.Y., Gurrola-Gal, F.J., Nozari, A., Sathaye, S. and Sitarik, J.P., "Performance Analysis Techniques for IC Manufacturing Lines," AT&T Bell Labs Technical Journal Vol.65, 46-56, 1986.
[19] Buxey, G., "Production Scheduling: Practice and Theory," European Journal of Operational Research Vol.39, No.1, 17-31, 1989.
[20] Carlier, J., "The One-Machine Scheduling Problem," European Journal of Operational Research Vol.11, 42-47, 1982.
[21] Carlier, J. and Pinson, E., "An Algorithm for Solving the Job-Shop Problem," Management Science Vol.35, No.2, 164-176, 1989.
[22] Charlton, J.M. and Death, C.C., "A Generalized Machine Scheduling Algorithm," Operational Research Quarterly Vol.21, No.1, 127-134, 1970.
[23] Charlton, J.M. and Death, C.C., "A Method of Solution for General Machine Scheduling Problems," Operations Research Vol.18, No.4, 689-707, 1970.
[24] Chen, H., Harrison, J.M., Mandelbaum, A., Van Ackere, A. and Wein, L.M., "Empirical Evaluation of a Queueing Network Model for Semiconductor Wafer Fabrication," Operations Research Vol.36, No.2, 202-215, 1988.
[25] Cheng, T.C.E. and Gupta, M.C., "Survey of Scheduling Research Involving Due Date Determination Decisions," European Journal of Operational Research Vol.38, 156-166, 1989.
[26] Coffman, E.G. (ed.), Computer and Job Shop Scheduling Theory, Wiley, New York, 1976.
[27] Consilium Inc., Short-Interval Scheduling System User's Manual, Internal Publication, Mountain View, CA.
[28] Conway, R.W., Maxwell, W.L. and Miller, L.W., Theory of Scheduling, Addison-Wesley, Reading, MA, 1967.
[29] Corwin, B.D. and Esogbue, A.O., "Two Machine Flow Shop Scheduling Problems with Sequence Dependent Setup Times: A Dynamic Programming Approach," Naval Research Logistics Quarterly Vol.21, 515-524, 1974.
[30] Dannenbring, D.G., "Procedures for Estimating Optimal Solution Values for Large Combinatorial Problems," Management Science Vol.23, No.12, 1273-1283, 1977.
[31] Dannenbring, D.G., "An Evaluation of Flow Shop Sequencing Heuristics," Management Science Vol.23, No.11, 1174-1182, 1977.
[32] Dayhoff, J.E. and Atherton, R.W., "Simulation of VLSI Manufacturing Areas," VLSI Design, 84-92, December 1984.
[33] Dayhoff, J.E. and Atherton, R.W., "A Model for Wafer Fabrication Dynamics in Integrated Circuit Manufacturing," IEEE Transactions on Systems, Man and Cybernetics Vol.17, No.1, 91-100, 1987.
[34] Dayhoff, J.E. and Atherton, R.W., "Signature Analysis of Dispatch Schemes in Wafer Fabrication," IEEE Transactions on Components, Hybrids and Manufacturing Technology Vol.9, No.4, 518-525, 1986.
[35] Dayhoff, J.E. and Atherton, R.W., "Signature Analysis: Simulation of Inventory, Cycle Time and Throughput Tradeoffs in Wafer Fabrication," IEEE Transactions on Components, Hybrids and Manufacturing Technology Vol.9, No.4, 498-507, 1986.
[36] Deb, R.K. and Serfozo, R.F., "Optimal Control of Batch Service Queues," Advances in Applied Probability Vol.5, 340-361, 1973.
[37] Dempster, M.A.H., Lenstra, J.K. and Rinnooy Kan, A.H.G. (eds.), Deterministic and Stochastic Scheduling, D. Reidel, Boston, 1982.
[38] Dietrich, B.L., Scheduling on Parallel Unrelated Machines with Setups, Res. Rep. RC 14279, IBM T.J. Watson Research Center, Yorktown Heights, NY, 1988.
[39] Driscoll, W.C. and Emmons, H., "Scheduling Production on One Machine with Changeover Costs," AIIE Transactions Vol.9, No.4, 388-395, 1977.
[40] Florian, M., Trepant, P. and McMahon, G., "An Implicit Enumeration Algorithm for the Machine Sequencing Problem," Management Science Vol.17, No.12, B782-B792, 1971.
[41] French, S., Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop, Wiley, New York, 1982.
[42] Garey, M.R. and Johnson, D.S., Computers and Intractability, W.H. Freeman & Co., San Francisco, 1979.
[43] Geoffrion, A.M. and Graves, G.W., "Scheduling Parallel Production Lines with Changeover Costs: Practical Application of a Quadratic Assignment/LP Approach," Operations Research Vol.24, No.4, 595-610, 1976.
[44] Glassey, C.R. and Resende, M.G.C., "Closed-Loop Job Release Control for VLSI Circuit Manufacturing," IEEE Transactions on Semiconductor Manufacturing Vol.1, No.1, 36-46, 1988.
[45] Glassey, C.R. and Resende, M.G.C., "A Scheduling Rule for Job Release in Semiconductor Fabrication," Operations Research Letters Vol.7, No.5, 213-217, 1988.
[46] Golden, B.L. and Alt, F.B., "Interval Estimation of a
Global Optimum for Large Combinatorial Problems,"
Naval Research Logistics Quarterly Vol. 26, 69-77, 1979.
[47] Golden, B.L. and Stewart, W.R., "Empirical Analysis of
Heuristics," in The Travelling Salesman Problem,
E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan and
D.B. Shmoys (eds.), John Wiley, New York, 1985.
[48] Graham, R.L., "Bounds on Multiprocessing Timing
Anomalies," SIAM Journal of Applied Mathematics
Vol. 17, 416-429, 1969.
[49] Gupta, J.N.D., "Flowshop Schedules with Sequence
Dependent Setup Times," Journal of the Operations
Research Society of Japan Vol. 29, No. 3, 206-219, 1986.
[50] Gusfield, D., "Bounds for Naive Multiple Machine
Scheduling with Release Times and Due Dates,"
Journal of Algorithms Vol. 5, 1-6, 1984.
[51] Hall, L. and Shmoys, D., Jackson's Rule for One-Machine
Scheduling: Making a Good Heuristic Better,
Research Report No. OR 199-89, Operations Research
Center, Massachusetts Institute of Technology,
Cambridge, August 1989.
[52] Hariri, A.M.A. and Potts, C.N., "A Branch and Bound
Algorithm to Minimize the Number of Late Jobs in a
Permutation Flowshop," European Journal of
Operational Research Vol. 38, 228-237, 1989.
[53] Heck, H.W., Computational Schemes for Flowshops,
Unpublished Ph.D. Dissertation, Dept. of Industrial
and Systems Engineering, University of Florida, 1973.
[54] Heck, H.W. and Roberts, S.D., Disjunctive Graph Algorithms
for Resource Constrained Sequencing in Flowshops:
Maximum Flow Time and Maximum Tardiness, Project
Themis Technical Report No. 47, Dept. of Industrial
and Systems Engineering, University of Florida,
Gainesville, October 1970.
[55] Heck, H.W. and Roberts, S.D., Disjunctive Graph Algorithms
for Average Flow Time and Average Tardiness,
Project Themis Technical Report No. 48, Dept. of
Industrial and Systems Engineering, University of
Florida, Gainesville, October 1970.
[56] Ikura, Y. and Gimple, M., "Efficient Scheduling
Algorithms for a Single Batch Processing Machine,"
Operations Research Letters Vol. 5, No. 2, 61-65, 1986.
[57] Irani, S.A., Gunasena, U., Davachi, A. and Enscore, E.E.,
"Single-Machine Setup-Dependent Sequencing using a
Complexity Ranking Scheme," Journal of
Manufacturing Systems Vol. 7, No. 1, 11-22, 1989.
[58] Kise, H., Ibaraki, T. and Mine, H., "A Solvable Case of
the One-Machine Scheduling Problem with Ready and
Due Times," Operations Research Vol. 26, No. 1,
121-126, 1978.
[59] Lageweg, B.J., Lawler, E.L., Lenstra, J.K. and Rinnooy
Kan, A.H.G., Computer-Aided Complexity
Classification of Combinatorial Problems, Report
No. BW 137, Mathematisch Centrum, Amsterdam, 1981.
[60] Lageweg, B.J., Lawler, E.L., Lenstra, J.K. and Rinnooy
Kan, A.H.G., Computer-Aided Complexity
Classification of Deterministic Scheduling
Problems, Report No. BW 138, Mathematisch Centrum,
Amsterdam, 1981.
[61] Lageweg, B.J., Lenstra, J.K. and Rinnooy Kan, A.H.G., "Job
Shop Scheduling by Implicit Enumeration,"
Management Science Vol. 24, No. 4, 441-450, 1977.
[62] Lageweg, B.J., Lenstra, J.K. and Rinnooy Kan, A.H.G., "A
General Bounding Scheme for the Permutation
Flowshop Problem," Operations Research Vol. 26,
No. 1, 53-67, 1978.
[63] Lawler, E., "On Scheduling Problems with Deferral Costs,"
Management Science Vol. 11, No. 2, 280-288, 1964.
[64] Lawler, E., "Optimal Sequencing of a Single Machine
subject to Precedence Constraints," Management
Science Vol. 19, 544-546, 1973.
[65] Lawler, E.L., "Sequencing to Minimize the Weighted
Number of Tardy Jobs," Revue Francaise
Automatique, Informatique et Recherche
Operationelle Vol. 10, 27-33, 1976.
[66] Lawler, E.L. and Moore, J.M., "A Functional Equation
and its Application to Resource Allocation and
Sequencing Problems," Management Science Vol. 16,
77-84, 1969.
[67] Leachman, R.C., Preliminary Design and Development of a
Corporate-Level Production Planning System for the
Semiconductor Industry, OR Center, University of
California, Berkeley, February 1986.
[68] Leachman, R.C., Solorzano, M. and Glassey, C.R., A Queue
Management Policy for the Release of Factory Work
Orders, presented at the ORSA/TIMS Conference,
Vancouver, May 1989.
[69] Lee, C.Y. and Denardo, E., "Rolling Planning Horizons:
Error Bounds for the Dynamic Lot Size Model,"
Mathematics of Operations Research Vol. 11, No. 3,
423-432, 1986.
[70] Lee, C.Y., "Parallel Machine Scheduling with
Non-Simultaneous Machine Available Time," to appear
in Discrete Applied Mathematics, 1990.
[71] Lenstra, J.K., Sequencing by Enumerative Methods,
Mathematical Centre Tract 69, Mathematisch
Centrum, Amsterdam, 1977.
[72] Lenstra, J.K. and Rinnooy Kan, A.H.G., "Complexity of
Scheduling under Precedence Constraints,"
Operations Research Vol. 26, No. 1, 22-35, 1978.
[73] Lockett, A.G. and Muhlemann, A.P., "A Scheduling Problem
Involving Sequence-Dependent Changeover Times,"
Operations Research Vol. 20, 895-902, 1972.
[74] Lozinski, C. and Glassey, C.R., "Bottleneck Starvation
Indicators for Shop-Floor Control," IEEE
Transactions on Semiconductor Manufacturing Vol. 1,
No. 4, 147-153, 1988.
[75] Matsuo, H., Suh, C.J. and Sullivan, R.S., A Controlled
Search Simulated Annealing Approach for the
General Jobshop Scheduling Problem, Working Paper
No. 03-04-88, Dept. of Management, Graduate School
of Business, University of Texas at Austin, 1988.
[76] McMahon, G. and Florian, M., "On Scheduling with Ready
Times and Due Dates to Minimize Maximum Lateness,"
Operations Research Vol. 23, No. 3, 475-482, 1975.
[77] Medhi, J., "Waiting Time Distribution in a Poisson Queue
with a General Bulk Service Rule," Management
Science Vol. 21, No. 7, 777-782, 1975.
[78] Monma, C. and Potts, C.N., "On the Complexity of
Scheduling with Batch Setup Times," Operations
Research Vol. 37, No. 5, 798-804, 1989.
[79] Morton, T., Lawrence, S. and Kekre, S., "SCHED-STAR: A
Price-Based Shop Scheduling Module," Journal of
Manufacturing and Operations Management Vol. 1,
131-181, 1988.
[80] Neuts, M., "A General Class of Bulk Queues with Poisson
Input," Annals of Mathematical Statistics Vol. 38,
759-770, 1967.
[81] Panwalkar, S.S. and Iskander, W., "A Survey of Scheduling
Rules," Operations Research Vol. 25, No. 1, 45-61, 1977.
[82] Park, Y.B., Pegden, C.D. and Enscore, E.E., "A Survey and
Evaluation of Static Flowshop Scheduling
Heuristics," International Journal of Production
Research Vol. 22, No. 1, 127-141, 1984.
[83] Parker, R.G., Deane, R.H. and Holmes, R.A., "On the Use of
a Vehicle Routing Algorithm for the Parallel
Processor Problem with Sequence-Dependent
Changeover Costs," AIIE Transactions, 155-160,
June 1977.
[84] Picard, J.C. and Queyranne, M., "The Time-Dependent
Travelling Salesman Problem and its Application to
the Tardiness Problem in One-Machine Scheduling,"
Operations Research Vol. 26, No. 1, 86-110, 1978.
[85] Potts, C.N., "Analysis of a Heuristic for One Machine
Sequencing with Release Dates and Delivery Times,"
Operations Research Vol. 28, 1436-1441, 1980.
[86] Potts, C.N. and Van Wassenhove, L.N., "Algorithms for
Scheduling a Single Machine to Minimize the
Weighted Number of Late Jobs," Management Science
Vol. 34, 843-858, 1988.
[87] Raman, N., Talbot, F.B. and Rachamadugu, R.V., "Due Date
Based Scheduling in a General Flexible
Manufacturing System," Journal of Operations
Management Vol. 8, No. 2, 115-132, 1989.
[88] Rodammer, F.A. and White, K.P., "A Recent Survey of
Production Scheduling," IEEE Transactions on
Systems, Man and Cybernetics Vol. 18, No. 6,
841-851, 1988.
[89] Spence, A.M. and Welter, D.J., "Capacity Planning of a
Photolithography Work Cell in a Wafer
Manufacturing Line," Proceedings of the IEEE
Conference on Robotics and Automation, 702-708, 1987.
[90] Sze, S.M., VLSI Technology, McGraw-Hill, New York, 1983.
[91] Szwarc, W., "Optimal Elimination Methods in the m x n
Flow-Shop Scheduling Problem," Operations Research
Vol. 21, 1250-1259, 1973.
[92] Uskup, E. and Smith, S.B., "A Branch and Bound Algorithm
for Two-Stage Production Sequencing Problems,"
Operations Research Vol. 23, No. 1, 118-136, 1975.
[93] Van Laarhoven, P.J.M., Aarts, E.H.L. and Lenstra, J.K.,
Job Shop Scheduling by Simulated Annealing,
Research Report OS-R8809, Centre for Mathematics
and Computer Science, Amsterdam, 1988.
[94] Vepsalainen, A.P.J. and Morton, T.E., "Priority Rules for
Job Shops with Weighted Tardiness Costs,"
Management Science Vol. 33, No. 8, 1035-1047, 1987.
[95] Vepsalainen, A.P.J. and Morton, T.E., "Improving Local
Priority Rules with Global Leadtime Estimates: A
Simulation Study," Journal of Manufacturing and
Operations Management Vol. 1, No. 1, 102-118, 1988.
[96] Villarreal, F.J. and Bulfin, R.L., "Scheduling a Single
Machine to Minimize the Weighted Number of Tardy
Jobs," IIE Transactions Vol. 15, 337-343, 1983.
[97] Wein, L.M., "Scheduling Semiconductor Wafer
Fabrication," IEEE Transactions on Semiconductor
Manufacturing Vol. 1, No. 3, 115-129, 1988.
[98] White, C.H. and Wilson, R.C., "Sequence-Dependent Setup
Times and Job Sequencing," International Journal
of Production Research Vol. 15, No. 2, 191-202, 1977.
[99] Wittrock, R.J., Scheduling Parallel Machines with Setups,
Research Report RC 11740, IBM Thomas J. Watson
Research Center, Yorktown Heights, NY 10598, 1986.
BIOGRAPHICAL SKETCH
Reha Uzsoy was born on April 21, 1963, in London, U.K.
He received a bachelor's degree in industrial engineering in
June 1984, a bachelor's degree in mathematics in June 1985
and a Master of Science in industrial engineering in June
1986, all from Bogazici University in Istanbul, Turkey. He
was admitted to the Department of Industrial and Systems
Engineering at the University of Florida in August 1986, and
has been studying toward the Ph.D. degree since that date.
I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
Louis A. Martin-Vega, Chairman
Associate Professor of Industrial
and Systems Engineering
I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
Chung-Yee Lee, Cochairman
Associate Professor of Industrial
and Systems Engineering
I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
Donald J. Elzinga
Professor of Industrial and
Systems Engineering
I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
Selcuk Erenguc
Associate Professor of Decision
and Information Sciences
I certify that I have read this study and that in my
opinion it conforms to acceptable standards of scholarly
presentation and is fully adequate, in scope and quality, as
a dissertation for the degree of Doctor of Philosophy.
Sencer Yeralan
Associate Professor of Industrial
and Systems Engineering
This dissertation was submitted to the Graduate Faculty
of the College of Engineering and to the Graduate School and
was accepted as partial fulfillment of the requirements for
the degree of Doctor of Philosophy.
Winfred M. Phillips
Dean, College of Engineering
Madelyn M. Lockhart
Dean, Graduate School